Front cover


End-to-End Scheduling
with IBM Tivoli Workload
Scheduler V 8.2
Plan and implement your end-to-end
scheduling environment

Experiment with real-life
scenarios

Learn best practices and
troubleshooting




                                                             Vasfi Gucer
                                                        Michael A. Lowry
                                                   Finn Bastrup Knudsen




ibm.com/redbooks
International Technical Support Organization

End-to-End Scheduling with IBM Tivoli Workload
Scheduler V 8.2

September 2004




                                               SG24-6624-00
Note: Before using this information and the product it supports, read the information in
 “Notices” on page ix.




First Edition (September 2004)

This edition applies to IBM Tivoli Workload Scheduler Version 8.2 and IBM Tivoli Workload Scheduler
for z/OS Version 8.2.

© Copyright International Business Machines Corporation 2004. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP
Schedule Contract with IBM Corp.
Contents

                 Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
                 Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x

                 Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
                 The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
                 Notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
                 Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
                 Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

                 Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
                 1.1 Job scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
                 1.2 Introduction to end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
                 1.3 Introduction to Tivoli Workload Scheduler for z/OS. . . . . . . . . . . . . . . . . . . 4
                    1.3.1 Overview of Tivoli Workload Scheduler for z/OS . . . . . . . . . . . . . . . . 4
                    1.3.2 Tivoli Workload Scheduler for z/OS architecture . . . . . . . . . . . . . . . . 4
                 1.4 Introduction to Tivoli Workload Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . 5
                    1.4.1 Overview of IBM Tivoli Workload Scheduler . . . . . . . . . . . . . . . . . . . . 5
                    1.4.2 IBM Tivoli Workload Scheduler architecture . . . . . . . . . . . . . . . . . . . . 6
                 1.5 Benefits of integrating Tivoli Workload Scheduler for z/OS and Tivoli
                      Workload Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
                 1.6 Summary of enhancements in V8.2 related to end-to-end scheduling . . . . 8
                     1.6.1 New functions related to performance and scalability . . . . . . . . . . . . 8
                    1.6.2 General enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
                    1.6.3 Security enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
                 1.7 The terminology used in this book. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

                 Chapter 2. End-to-end scheduling architecture . . . . . . . . . . . . . . . . . . . . . 25
                 2.1 IBM Tivoli Workload Scheduler for z/OS architecture . . . . . . . . . . . . . . . . 27
                    2.1.1 Tivoli Workload Scheduler for z/OS configuration. . . . . . . . . . . . . . . 28
                    2.1.2 Tivoli Workload Scheduler for z/OS database objects . . . . . . . . . . . 32
                    2.1.3 Tivoli Workload Scheduler for z/OS plans. . . . . . . . . . . . . . . . . . . . . 37
                    2.1.4 Other Tivoli Workload Scheduler for z/OS features . . . . . . . . . . . . . 44
                 2.2 Tivoli Workload Scheduler architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
                    2.2.1 The IBM Tivoli Workload Scheduler network . . . . . . . . . . . . . . . . . . 51
                    2.2.2 Tivoli Workload Scheduler workstation types . . . . . . . . . . . . . . . . . . 54
                    2.2.3 Tivoli Workload Scheduler topology . . . . . . . . . . . . . . . . . . . . . . . . . 56
                    2.2.4 IBM Tivoli Workload Scheduler components . . . . . . . . . . . . . . . . . . 57
                    2.2.5 IBM Tivoli Workload Scheduler plan . . . . . . . . . . . . . . . . . . . . . . . . . 58
                 2.3 End-to-end scheduling architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59


2.3.1 How end-to-end scheduling works . . . . . . . . . . . . . . . . . . . . . . . . . . 60
                  2.3.2 Tivoli Workload Scheduler for z/OS end-to-end components . . . . . . 62
                  2.3.3 Tivoli Workload Scheduler for z/OS end-to-end configuration . . . . . 68
                  2.3.4 Tivoli Workload Scheduler for z/OS end-to-end plans . . . . . . . . . . . 75
                  2.3.5 Making the end-to-end scheduling system fault tolerant. . . . . . . . . . 84
                  2.3.6 Benefits of end-to-end scheduling. . . . . . . . . . . . . . . . . . . . . . . . . . . 86
               2.4 Job Scheduling Console and related components . . . . . . . . . . . . . . . . . . 89
                  2.4.1 A brief introduction to the Tivoli Management Framework . . . . . . . . 90
                  2.4.2 Job Scheduling Services (JSS). . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
                  2.4.3 Connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
               2.5 Job log retrieval in an end-to-end environment . . . . . . . . . . . . . . . . . . . . . 98
                  2.5.1 Job log retrieval via the Tivoli Workload Scheduler connector . . . . . 98
                  2.5.2 Job log retrieval via the OPC connector . . . . . . . . . . . . . . . . . . . . . . 99
                  2.5.3 Job log retrieval when firewalls are involved. . . . . . . . . . . . . . . . . . 101
               2.6 Tivoli Workload Scheduler, important files, and directory structure . . . . 103
               2.7 conman commands in the end-to-end environment . . . . . . . . . . . . . . . . 106

               Chapter 3. Planning end-to-end scheduling with Tivoli Workload Scheduler
                           8.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
               3.1 Different ways to do end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . 111
               3.2 The rationale behind end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . 112
               3.3 Before you start the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
                  3.3.1 How to order the Tivoli Workload Scheduler software . . . . . . . . . . 114
                  3.3.2 Where to find more information for planning . . . . . . . . . . . . . . . . . . 116
                3.4 Planning end-to-end scheduling with Tivoli Workload Scheduler for z/OS . . 116
                  3.4.1 Tivoli Workload Scheduler for z/OS documentation . . . . . . . . . . . . 117
                  3.4.2 Service updates (PSP bucket, APARs, and PTFs) . . . . . . . . . . . . . 117
                  3.4.3 Tivoli Workload Scheduler for z/OS started tasks for end-to-end
                         scheduling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
                  3.4.4 Hierarchical File System (HFS) cluster . . . . . . . . . . . . . . . . . . . . . . 124
                  3.4.5 Data sets related to end-to-end scheduling . . . . . . . . . . . . . . . . . . 127
                  3.4.6 TCP/IP considerations for end-to-end server in sysplex . . . . . . . . . 129
                  3.4.7 Upgrading from Tivoli Workload Scheduler for z/OS 8.1 end-to-end
                         scheduling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
               3.5 Planning for end-to-end scheduling with Tivoli Workload Scheduler . . . 139
                  3.5.1 Tivoli Workload Scheduler publications and documentation. . . . . . 139
                  3.5.2 Tivoli Workload Scheduler service updates (fix packs) . . . . . . . . . . 140
                  3.5.3 System and software requirements. . . . . . . . . . . . . . . . . . . . . . . . . 140
                  3.5.4 Network planning and considerations . . . . . . . . . . . . . . . . . . . . . . . 141
                  3.5.5 Backup domain manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
                  3.5.6 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
                  3.5.7 Fault-tolerant agent (FTA) naming conventions . . . . . . . . . . . . . . . 146
               3.6 Planning for the Job Scheduling Console . . . . . . . . . . . . . . . . . . . . . . . . 149



3.6.1 Job Scheduling Console documentation. . . . . . . . . . . . . . . . . . . . . 150
   3.6.2 Job Scheduling Console service (fix packs) . . . . . . . . . . . . . . . . . . 150
   3.6.3 Compatibility and migration considerations for the JSC . . . . . . . . . 151
   3.6.4 Planning for Job Scheduling Console availability . . . . . . . . . . . . . . 153
   3.6.5 Planning for server started task for JSC communication . . . . . . . . 154
3.7 Planning for migration or upgrade from previous versions . . . . . . . . . . . 155
3.8 Planning for maintenance or upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . 156

Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end
             scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4.1 Before the installation is started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
4.2 Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling . . 159
   4.2.1 Executing EQQJOBS installation aid . . . . . . . . . . . . . . . . . . . . . . . 162
   4.2.2 Defining Tivoli Workload Scheduler for z/OS subsystems . . . . . . . 167
   4.2.3 Allocate end-to-end data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
   4.2.4 Create and customize the work directory . . . . . . . . . . . . . . . . . . . . 170
   4.2.5 Create started task procedures for Tivoli Workload Scheduler for z/OS
          173
   4.2.6 Initialization statements for Tivoli Workload Scheduler for z/OS
          end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
   4.2.7 Initialization statements used to describe the topology. . . . . . . . . . 184
   4.2.8 Example of DOMREC and CPUREC definitions. . . . . . . . . . . . . . . 197
   4.2.9 The JTOPTS TWSJOBNAME() parameter . . . . . . . . . . . . . . . . . . . 200
   4.2.10 Verify end-to-end installation in Tivoli Workload Scheduler for z/OS .
          203
4.3 Installing Tivoli Workload Scheduler in an end-to-end environment . . . . 207
   4.3.1 Installing multiple instances of Tivoli Workload Scheduler on one
          machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
   4.3.2 Verify the Tivoli Workload Scheduler installation . . . . . . . . . . . . . . 211
4.4 Define, activate, verify fault-tolerant workstations . . . . . . . . . . . . . . . . . . 211
   4.4.1 Define fault-tolerant workstation in Tivoli Workload Scheduler controller
          workstation database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
   4.4.2 Activate the fault-tolerant workstation definition . . . . . . . . . . . . . . . 213
   4.4.3 Verify that the fault-tolerant workstations are active and linked . . . 214
4.5 Creating fault-tolerant workstation job definitions and job streams . . . . . 217
   4.5.1 Centralized and non-centralized scripts . . . . . . . . . . . . . . . . . . . . . 217
   4.5.2 Definition of centralized scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
   4.5.3 Definition of non-centralized scripts . . . . . . . . . . . . . . . . . . . . . . . . 221
   4.5.4 Combination of centralized script and VARSUB, JOBREC parameters
          232
   4.5.5 Definition of FTW jobs and job streams in the controller. . . . . . . . . 234
4.6 Verification test of end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . . . 235
   4.6.1 Verification of job with centralized script definitions . . . . . . . . . . . . 236



4.6.2 Verification of job with non-centralized scripts . . . . . . . . . . . . . . . . 239
                  4.6.3 Verification of centralized script with JOBREC parameters . . . . . . 242
               4.7 Activate support for the Tivoli Workload Scheduler Job Scheduling Console
                    245
                  4.7.1 Install and start Tivoli Workload Scheduler for z/OS JSC server . . 246
                  4.7.2 Installing and configuring Tivoli Management Framework 4.1 . . . . 252
                  4.7.3 Alternate method using Tivoli Management Framework 3.7.1 . . . . 253
                  4.7.4 Creating connector instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
                   4.7.5 Creating TMF administrators for Tivoli Workload Scheduler . . . . . 257
                  4.7.6 Installing the Job Scheduling Console . . . . . . . . . . . . . . . . . . . . . . 261

               Chapter 5. End-to-end implementation scenarios and examples. . . . . . 265
               5.1 Description of our environment and systems . . . . . . . . . . . . . . . . . . . . . 266
               5.2 Creation of the Symphony file in detail . . . . . . . . . . . . . . . . . . . . . . . . . . 273
               5.3 Migrating Tivoli OPC tracker agents to end-to-end scheduling . . . . . . . . 274
                  5.3.1 Migration benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
                  5.3.2 Migration planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
                  5.3.3 Migration checklist. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
                  5.3.4 Migration actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
                  5.3.5 Migrating backward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
               5.4 Conversion from Tivoli Workload Scheduler network to Tivoli Workload
                    Scheduler for z/OS managed network . . . . . . . . . . . . . . . . . . . . . . . . . . 288
                  5.4.1 Illustration of the conversion process . . . . . . . . . . . . . . . . . . . . . . . 289
                  5.4.2 Considerations before doing the conversion. . . . . . . . . . . . . . . . . . 291
                  5.4.3 Conversion process from Tivoli Workload Scheduler to Tivoli Workload
                         Scheduler for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
                  5.4.4 Some guidelines to automate the conversion process . . . . . . . . . . 299
               5.5 Tivoli Workload Scheduler for z/OS end-to-end fail-over scenarios . . . . 303
                  5.5.1 Configure Tivoli Workload Scheduler for z/OS backup engines . . . 304
                  5.5.2 Configure DVIPA for Tivoli Workload Scheduler for z/OS end-to-end
                         server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
                  5.5.3 Configure backup domain manager for first-level domain manager 306
                  5.5.4 Switch to Tivoli Workload Scheduler backup domain manager . . . 308
                  5.5.5 Implementing Tivoli Workload Scheduler high availability on high
                         availability environments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
               5.6 Backup and maintenance guidelines for FTAs . . . . . . . . . . . . . . . . . . . . 318
                  5.6.1 Backup of the Tivoli Workload Scheduler FTAs . . . . . . . . . . . . . . . 319
                  5.6.2 Stdlist files on Tivoli Workload Scheduler FTAs . . . . . . . . . . . . . . . 319
                  5.6.3 Auditing log files on Tivoli Workload Scheduler FTAs. . . . . . . . . . . 321
                  5.6.4 Monitoring file systems on Tivoli Workload Scheduler FTAs . . . . . 321
                  5.6.5 Central repositories for important Tivoli Workload Scheduler files . 322
               5.7 Security on fault-tolerant agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
                  5.7.1 The security file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325



5.7.2 Sample security file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
5.8 End-to-end scheduling tips and tricks . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
   5.8.1 File dependencies in the end-to-end environment . . . . . . . . . . . . . 331
   5.8.2 Handling offline or unlinked workstations . . . . . . . . . . . . . . . . . . . . 332
   5.8.3 Using dummy jobs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
   5.8.4 Placing job scripts in the same directories on FTAs . . . . . . . . . . . . 334
   5.8.5 Common errors for jobs on fault-tolerant workstations . . . . . . . . . . 334
   5.8.6 Problems with port numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
   5.8.7 Cannot switch to new Symphony file (EQQPT52E) messages. . . . 340

Appendix A. Connector reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Setting the Tivoli environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Authorization roles required . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Working with Tivoli Workload Scheduler for z/OS connector instances . . . . . 344
   The wopcconn command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Working with Tivoli Workload Scheduler connector instances . . . . . . . . . . . . 346
   The wtwsconn.sh command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Useful Tivoli Framework commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350

Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353




Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions
are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES
THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrates programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.



Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:

    AIX®                                NetView®                              ServicePac®
    AS/400®                             OS/390®                               Tivoli®
    HACMP™                              OS/400®                               Tivoli Enterprise Console®
    IBM®                                RACF®                                 TME®
    Language Environment®               Redbooks™                             VTAM®
    Maestro™                            Redbooks (logo)™                      z/OS®
    MVS™                                S/390®                                zSeries®

The following terms are trademarks of other companies:

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun
Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.

Intel is a trademark of Intel Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, and service names may be trademarks or service marks of others.




Preface

                 The beginning of the new century sees the data center with a mix of work,
                 hardware, and operating systems previously undreamed of. Today’s challenge is
                 to manage disparate systems with minimal effort and maximum reliability. People
                 experienced in scheduling traditional host-based batch work must now manage
                 distributed systems, and those working in the distributed environment must take
                 responsibility for work running on the corporate OS/390® system.

                 This IBM® Redbook considers how best to provide end-to-end scheduling using
                 both the distributed (previously known as Maestro™) and mainframe (previously
                 known as OPC) components of IBM Tivoli® Workload Scheduler Version 8.2.

                 In this book, we provide the information for installing the necessary Tivoli
                 Workload Scheduler software components and configuring them to communicate
                 with each other. In addition to technical information, we consider various
                 scenarios that may be encountered in the enterprise and suggest practical
                 solutions. We describe how to manage work and dependencies across both
                 environments using a single point of control.

                 We believe that this redbook will be a valuable reference for IT specialists who
                 implement end-to-end scheduling with Tivoli Workload Scheduler 8.2.



The team that wrote this redbook
                 This redbook was produced by a team of specialists from around the world
                 working at the International Technical Support Organization, Austin Center.

                 Vasfi Gucer is a Project Leader at the International Technical Support
                 Organization, Austin Center. He worked for IBM Turkey for 10 years and has
                 been with the ITSO since January 1999. He has more than 10 years of
                 experience in the areas of systems management, and networking hardware and
                 software on mainframe and distributed platforms. He has worked on various
                 Tivoli customer projects as a Systems Architect in Turkey and the United States.
                 Vasfi is also an IBM Certified Senior IT Specialist.

                 Michael A. Lowry is an IBM Certified Consultant and Instructor currently
                 working for IBM in Stockholm, Sweden. Michael does support, consulting, and
                 training for IBM customers, primarily in Europe. He has 10 years of experience in
                 the IT services business and has worked for IBM since 1996. Michael studied
                 engineering and biology at the University of Texas in Austin, his hometown.



Before moving to Sweden, he worked in Austin for Apple, IBM, and the IBM Tivoli
                Workload Scheduler Support Team at Tivoli Systems. He has five years of
                experience with Tivoli Workload Scheduler and has extensive experience with
                IBM network and storage management products. He is also an IBM Certified
                AIX® Support Professional.

                Finn Bastrup Knudsen is an Advisory IT Specialist in Integrated Technology
                Services (ITS) in IBM Global Services in Copenhagen, Denmark. He has 12
                years of experience working with IBM Tivoli Workload Scheduler for z/OS®
                (OPC) and four years of experience working with IBM Tivoli Workload Scheduler.
                Finn primarily does consultation and services at customer sites, as well as IBM
                Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler training.
                He is a certified Tivoli Instructor in IBM Tivoli Workload Scheduler for z/OS and
                IBM Tivoli Workload Scheduler. He has worked at IBM for 13 years. His areas of
                expertise include IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli
                Workload Scheduler.

                Also thanks to the following people for their contributions to this project:

                International Technical Support Organization, Austin Center
                Budi Darmawan and Betsy Thaggard

                IBM Italy
                Angelo D'ambrosio, Paolo Falsi, Antonio Gallotti, Pietro Iannucci, Valeria
                Perticara

                IBM USA
                Robert Haimowitz, Stephen Viola

                IBM Germany
                Stefan Franke



Notice
                This publication is intended to help Tivoli specialists implement an end-to-end
                scheduling environment with IBM Tivoli Workload Scheduler 8.2. The information
                in this publication is not intended as the specification of any programming
                interfaces that are provided by Tivoli Workload Scheduler 8.2. See the
                PUBLICATIONS section of the IBM Programming Announcement for Tivoli
                Workload Scheduler 8.2 for more information about what publications are
                considered to be product documentation.




Become a published author
        Join us for a two- to six-week residency program! Help write an IBM Redbook
        dealing with specific products or solutions, while getting hands-on experience
        with leading-edge technologies. You will team with IBM technical professionals,
        Business Partners, and/or customers.

        Your efforts will help increase product acceptance and customer satisfaction. As
        a bonus, you will develop a network of contacts in IBM development labs, and
        increase your productivity and marketability.

        Find out more about the residency program, browse the residency index, and
        apply online at:
              ibm.com/redbooks/residencies.html



Comments welcome
        Your comments are important to us. We want our Redbooks™ to be as helpful as
        possible. Send us your comments about this or other Redbooks in one of the
        following ways:
           Use the online Contact us review redbook form found at:
              ibm.com/redbooks
           Send your comments in an e-mail to:
              redbook@us.ibm.com
           Mail your comments to:
              IBM Corporation, International Technical Support Organization
              Dept. JN9B Building 905 Internal Zip 2834
              11501 Burnet Road
              Austin, Texas 78758-3493






    Chapter 1.    Introduction
                  IBM Tivoli Workload Scheduler for z/OS Version 8.2 introduces many new
                  features and further integrates the OPC-based and Maestro-based scheduling
                  engines.

                  In this chapter, we give a brief introduction to the IBM Tivoli Workload Scheduler
                  8.2 suite and summarize the functions that are introduced in Version 8.2:
                      “Job scheduling” on page 2
                      “Introduction to end-to-end scheduling” on page 3
                      “Introduction to Tivoli Workload Scheduler for z/OS” on page 4
                      “Introduction to Tivoli Workload Scheduler” on page 5
                      “Benefits of integrating Tivoli Workload Scheduler for z/OS and Tivoli
                      Workload Scheduler” on page 7
                      “Summary of enhancements in V8.2 related to end-to-end scheduling” on
                      page 8
                      “The terminology used in this book” on page 21




1.1 Job scheduling
                 Scheduling is the nucleus of the data center. Orderly, reliable sequencing and
                 management of process execution is an essential part of IT management. The IT
                  environment consists of multiple strategic applications, such as SAP R/3 and
                 Oracle, payroll, invoicing, e-commerce, and order handling. These applications
                 run on many different operating systems and platforms. Legacy systems must be
                 maintained and integrated with newer systems.

                 Workloads are increasing, accelerated by electronic commerce. Staffing and
                 training requirements increase, and many platform experts are needed. There
                 are too many consoles and no overall point of control. Constant (24x7) availability
                 is essential and must be maintained through migrations, mergers, acquisitions,
                 and consolidations.

                 Dependencies exist between jobs in different environments. For example, a
                 customer can use a Web browser to fill out an order form that triggers a UNIX®
                 job that acknowledges the order, an AS/400® job that orders parts, a z/OS job
                 that debits the customer’s bank account, and a Windows NT® job that prints an
                 invoice and address label. Each job must run only after the job before it has
                 completed.

                 The IBM Tivoli Workload Scheduler Version 8.2 suite provides an integrated
                 solution for running this kind of complicated workload. Its Job Scheduling
                 Console provides a centralized point of control and unified interface for managing
                 the workload regardless of the platform or operating system on which the jobs
                 run.

                 The Tivoli Workload Scheduler 8.2 suite includes IBM Tivoli Workload Scheduler,
                 IBM Tivoli Workload Scheduler for z/OS, and the Job Scheduling Console. Tivoli
                 Workload Scheduler and Tivoli Workload Scheduler for z/OS can be used
                 separately or together.

                 End-to-end scheduling means using both products together, with an IBM
                 mainframe acting as the scheduling controller for a network of other
                 workstations.

                 Because Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS have
                 different histories and work on different platforms, someone who is familiar with
                 one of the programs may not be familiar with the other. For this reason, we give a
                 short introduction to each product separately and then proceed to discuss how
                 the two programs work together.




1.2 Introduction to end-to-end scheduling
         End-to-end scheduling means scheduling workload across all computing
         resources in your enterprise, from the mainframe in your data center, to the
         servers in your regional headquarters, all the way to the workstations in your
         local office. The Tivoli Workload Scheduler end-to-end scheduling solution is a
         system whereby scheduling throughout the network is defined, managed,
         controlled, and tracked from a single IBM mainframe or sysplex.

         End-to-end scheduling requires using two different programs: Tivoli Workload
         Scheduler for z/OS on the mainframe, and Tivoli Workload Scheduler on other
         operating systems (UNIX, Windows®, and OS/400®). This is shown in
         Figure 1-1.


          [Figure: the MASTERDM domain, with master domain manager OPCMASTER on z/OS running
          Tivoli Workload Scheduler for z/OS, sits above DomainA (domain manager DMA on AIX) and
          DomainB (domain manager DMB on HP-UX), which manage FTA1 (Linux), FTA2 (OS/400),
          FTA3 (Windows XP), and FTA4 (Solaris), all running Tivoli Workload Scheduler.]
         Figure 1-1 Both schedulers are required for end-to-end scheduling

         Despite the similar names, Tivoli Workload Scheduler for z/OS and Tivoli
         Workload Scheduler are quite different and have distinct histories. IBM Tivoli
         Workload Scheduler for z/OS was originally called OPC. It was developed by IBM
         in the early days of the mainframe. IBM Tivoli Workload Scheduler was originally
         developed by a company called Unison Software. Unison was purchased by
         Tivoli, and Tivoli was then purchased by IBM.

         Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler have slightly
                  different ways of working, but the two programs have many features in common. IBM
         has continued development of both programs toward the goal of providing closer



and closer integration between them. The reason for this integration is simple: to
                 facilitate an integrated scheduling system across all operating systems.

                 It should be obvious that end-to-end scheduling depends on using the mainframe
                 as the central point of control for the scheduling network. There are other ways to
                 integrate scheduling between z/OS and other operating systems. We will discuss
                 these in the following sections.



1.3 Introduction to Tivoli Workload Scheduler for z/OS
                 IBM Tivoli Workload Scheduler for z/OS has been scheduling and controlling
                 batch workloads in data centers since 1977. Originally called Operations
                 Planning and Control (OPC), the product has been extensively developed and
                 extended to meet the increasing demands of customers worldwide. An overnight
                 workload consisting of 100,000 production jobs is not unusual, and Tivoli
                 Workload Scheduler for z/OS can easily manage this kind of workload.


1.3.1 Overview of Tivoli Workload Scheduler for z/OS
                 IBM Tivoli Workload Scheduler for z/OS databases contain all of the information
                 about the work that is to be run, when it should run, and the resources that are
                 needed and available. This information is used to calculate a forecast called the
                 long-term plan. Data center staff can check this to confirm that the desired work
                 is being scheduled when required. The long-term plan usually covers a time
                 range of four to twelve weeks. The current plan is produced based on the
                 long-term plan and the databases. The current plan usually covers 24 hours and
                 is a detailed production schedule. Tivoli Workload Scheduler for z/OS uses the
                 current plan to submit jobs to the appropriate processor at the appropriate time.
                 All jobs in the current plan have Tivoli Workload Scheduler for z/OS status codes
                 that indicate the progress of work. When a job’s predecessors are complete,
                 Tivoli Workload Scheduler for z/OS considers it ready for submission. It verifies
                 that all requested resources are available, and when these conditions are met, it
                 causes the job to be submitted.


1.3.2 Tivoli Workload Scheduler for z/OS architecture
                 IBM Tivoli Workload Scheduler for z/OS consists of a controller and one or more
                  trackers. The controller, which runs on a z/OS system, manages the Tivoli Workload
                  Scheduler for z/OS databases and the long-term and current plans. The controller
                 schedules work and causes jobs to be submitted to the appropriate system at the
                 appropriate time.




Trackers are installed on every system managed by the controller. The tracker is
           the link between the controller and the managed system. The tracker submits
           jobs when the controller instructs it to do so, and it passes job start and job end
           information back to the controller.

            The controller can schedule jobs on z/OS systems using trackers or on other
           operating systems using fault-tolerant agents (FTAs). FTAs can be run on many
           operating systems, including AIX, Linux®, Solaris, HP-UX, OS/400, and
           Windows. FTAs run IBM Tivoli Workload Scheduler, formerly called Maestro.

           The most common way of working with the controller is via ISPF panels.
           However, several other methods are available, including Program Interfaces,
           TSO commands, and the Job Scheduling Console.

           The Job Scheduling Console (JSC) is a Java™-based graphical user interface for
           controlling and monitoring workload on the mainframe and other platforms. The
           first version of JSC was released at the same time as Tivoli OPC Version 2.3.
           The current version of JSC (1.3) has been updated with several new functions
           specific to Tivoli Workload Scheduler for z/OS. JSC provides a common interface
           to both Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler.

           For more information about IBM Tivoli Workload Scheduler for z/OS architecture,
           see Chapter 2, “End-to-end scheduling architecture” on page 25.



1.4 Introduction to Tivoli Workload Scheduler
           IBM Tivoli Workload Scheduler is descended from the Unison Maestro program.
           Unison Maestro was developed by Unison Software on the Hewlett-Packard MPE
           operating system. It was then ported to UNIX and Windows. In its various
           manifestations, Tivoli Workload Scheduler has a 17-year track record. During the
           processing day, Tivoli Workload Scheduler manages the production environment
           and automates most operator activities. It prepares jobs for execution, resolves
           interdependencies, and launches and tracks each job. Because jobs begin as
           soon as their dependencies are satisfied, idle time is minimized. Jobs never run
           out of sequence. If a job fails, IBM Tivoli Workload Scheduler can handle the
           recovery process with little or no operator intervention.


1.4.1 Overview of IBM Tivoli Workload Scheduler
           As with IBM Tivoli Workload Scheduler for z/OS, there are two basic aspects to
           job scheduling in IBM Tivoli Workload Scheduler: The database and the plan.
           The database contains all definitions for scheduling objects, such as jobs, job
           streams, resources, and workstations. It also holds statistics of job and job
           stream execution, as well as information on the user ID that created an object


and when an object was last modified. The plan contains all job scheduling
                 activity planned for a period of one day. In IBM Tivoli Workload Scheduler, the
                 plan is created every 24 hours and consists of all the jobs, job streams, and
                 dependency objects that are scheduled to execute for that day. Job streams that
                 do not complete successfully can be carried forward into the next day’s plan.


1.4.2 IBM Tivoli Workload Scheduler architecture
                 A typical IBM Tivoli Workload Scheduler network consists of a master domain
                 manager, domain managers, and fault-tolerant agents. The master domain
                 manager, sometimes referred to as just the master, contains the centralized
                 database files that store all defined scheduling objects. The master creates the
                 plan, called Symphony, at the start of each day.

                 Each domain manager is responsible for distribution of the plan to the
                 fault-tolerant agents (FTAs) in its domain. A domain manager also handles
                 resolution of dependencies between FTAs in its domain.

                 FTAs are the workhorses of a Tivoli Workload Scheduler network. FTAs are
                 where most jobs are run. As their name implies, fault-tolerant agents are fault
                 tolerant. This means that in the event of a loss of communication with the domain
                 manager, FTAs are capable of resolving local dependencies and launching their
                 jobs without interruption. FTAs are capable of this because each FTA has its own
                 copy of the plan. The plan contains a complete set of scheduling instructions for
                 the production day. Similarly, a domain manager can resolve dependencies
                 between FTAs in its domain even in the event of a loss of communication with the
                 master, because the domain manager’s plan receives updates from all
                 subordinate FTAs and contains the authoritative status of all jobs in that domain.

                 The master domain manager is updated with the status of all jobs in the entire
                 IBM Tivoli Workload Scheduler network. Logging and monitoring of the IBM Tivoli
                 Workload Scheduler network is performed on the master.

                 Starting with Tivoli Workload Scheduler Version 7.0, a new Java-based graphical
                 user interface was made available to provide an easy-to-use interface to Tivoli
                 Workload Scheduler. This new GUI is called Job Scheduling Console (JSC). The
                 current version of JSC has been updated with several functions specific to Tivoli
                 Workload Scheduler. The JSC provides a common interface to both Tivoli
                 Workload Scheduler and Tivoli Workload Scheduler for z/OS.

                 For more about IBM Tivoli Workload Scheduler architecture, see Chapter 2,
                 “End-to-end scheduling architecture” on page 25.




1.5 Benefits of integrating Tivoli Workload Scheduler for
z/OS and Tivoli Workload Scheduler
         Both Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler have
         individual strengths. While an enterprise running mainframe and non-mainframe
         systems could schedule and control work using only one of these tools or using
         both tools separately, a complete solution requires that Tivoli Workload
         Scheduler for z/OS and Tivoli Workload Scheduler work together.

         The Tivoli Workload Scheduler for z/OS long-term plan gives peace of mind by
         showing the workload forecast weeks or months into the future. Tivoli Workload
         Scheduler fault-tolerant agents go right on running jobs even if they lose
         communication with the domain manager. Tivoli Workload Scheduler for z/OS
         manages huge numbers of jobs through a sysplex of connected z/OS systems.
         Tivoli Workload Scheduler extended agents can control work on applications
         such as SAP R/3 and Oracle.

         Many data centers need to schedule significant amounts of both mainframe and
         non-mainframe jobs. It is often desirable to have a single point of control for
         scheduling on all systems in the enterprise, regardless of platform, operating
         system, or application. These businesses would probably benefit from
         implementing the end-to-end scheduling configuration. End-to-end scheduling
         enables the business to make the most of its computing resources.

         That said, the end-to-end scheduling configuration is not necessarily the best
         way to go for every enterprise. Some computing environments would probably
         benefit from keeping their mainframe and non-mainframe schedulers separate.
         Others would be better served by integrating the two schedulers in a different
         way (for example, z/OS [or MVS™] extended agents). Enterprises with a majority
         of jobs running on UNIX and Windows servers might not want to cede control of
         these jobs to the mainframe. Because the end-to-end solution involves software
         components on both mainframe and non-mainframe systems, there will have to
         be a high level of cooperation between your mainframe operators and your UNIX
         and Windows system administrators. Careful consideration of the requirements
         of end-to-end scheduling is necessary before going down this path.

         There are also several important decisions that must be made before beginning
         an implementation of end-to-end scheduling. For example, there is a trade-off
         between centralized control and fault tolerance. Careful planning now can save
         you time and trouble later. In Chapter 3, “Planning end-to-end scheduling with
         Tivoli Workload Scheduler 8.2” on page 109, we explain in detail the decisions
         that must be made prior to implementation. We strongly recommend that you
         read this chapter in full before beginning any implementation.




1.6 Summary of enhancements in V8.2 related to
end-to-end scheduling
                 Version 8.2 is the latest version of both IBM Tivoli Workload Scheduler and IBM
                 Tivoli Workload Scheduler for z/OS. In this section we cover the new functions
                 that affect end-to-end scheduling in three categories.


1.6.1 New functions related to performance and scalability
                 Several features are now available with IBM Tivoli Workload Scheduler for z/OS
                 8.2 that directly or indirectly affect performance.

                 Multiple first-level domain managers
                  IBM Tivoli Workload Scheduler for z/OS 8.1 was limited to a single first-level
                  domain manager (called the primary domain manager). In Version 8.2,
                 you can have multiple first-level domain managers (that is, the level immediately
                 below OPCMASTER). See Figure 1-2 on page 9.

                 This allows greater flexibility and scalability and eliminates a potential
                 performance bottleneck. It also allows greater freedom in defining your Tivoli
                 Workload Scheduler distributed network.




[Figure: master domain OPCMASTER on z/OS with two first-level domains, DomainZ (domain
manager DMZ on AIX) and DomainY (domain manager DMY on AIX); below them are DomainA
(DMA, AIX), DomainB (DMB, HP-UX), and DomainC (DMC, HP-UX), with fault-tolerant agents
FTA1 (AIX), FTA2 (Linux), FTA3 (Windows 2000), and FTA4 (Solaris).]
Figure 1-2 IBM Tivoli Workload Scheduler network with two first-level domains
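
In a configuration like the one in Figure 1-2, each first-level domain is declared
to the controller with its own DOMREC topology statement whose parent is the
master domain (MASTERDM). The fragment below is only an illustrative sketch that
reuses the domain and workstation names from the figure; the complete DOMREC and
CPUREC syntax is described in 4.2.7, “Initialization statements used to describe
the topology” on page 184.

/* Two first-level domains, both children of the master domain  */
/* DOMAIN = domain name, DOMMNGR = domain manager workstation   */
DOMREC DOMAIN(DOMAINZ)
       DOMMNGR(DMZ)
       DOMPARENT(MASTERDM)
DOMREC DOMAIN(DOMAINY)
       DOMMNGR(DMY)
       DOMPARENT(MASTERDM)

Lower-level domains such as DomainA, DomainB, and DomainC would name DOMAINZ or
DOMAINY in their DOMPARENT keyword instead of MASTERDM.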


Improved SCRIPTLIB parser
The job definitions for non-centralized scripts are kept in members in the
SCRPTLIB data set (EQQSCLIB DD statement). The definitions are specified with
keywords and parameters, as shown in Example 1-1:
Example 1-1 SCRPTLIB dataset
BROWSE    TWS.INST.SCRPTLIB(AIXJOB01) - 01.08        Line 00000000 Col 001
 Command ===>                                                  Scroll ===>
********************************* Top of Data *****************************
/* Job to be executed on AIX machines                                 */
VARSUB
   TABLES(FTWTABLE)
   PREFIX('&')
   VARFAIL(YES)
   TRUNCATE(NO)
JOBREC
   JOBSCR('&TWSHOME./scripts/return_rc.sh 2')
   RCCONDSUCC('(RC=4) OR (RC=6)')
RECOVERY
   OPTION(STOP)
   MESSAGE('Reply Yes when OK to continue')
******************************** Bottom of Data ***************************


                 The information in the SCRPTLIB member must be parsed every time a job is
                 added to the Symphony file (both at Symphony creation and when jobs are added dynamically).

                 In IBM Tivoli Workload Scheduler 8.1, the TSO parser was used, but this caused
                 a major performance issue: up to 70% of the time that it took to create a
                 Symphony file was spent parsing the SCRPTLIB members. In Version
                 8.2, a new parser has been implemented that significantly reduces the parsing
                 time and consequently the Symphony file creation time.

                 Check server status before Symphony file creation
                 In an end-to-end configuration, daily planning batch jobs require that both the
                 controller and the server are active, so that all of the tasks can be synchronized
                 and no unprocessed events are left in the event files. If the server is not active,
                 the daily planning batch process now fails at the beginning, avoiding pointless
                 extra processing. Two new log messages show the status of the end-to-end server:
                     EQQ3120E END-TO-END SERVER NOT AVAILABLE
                     EQQZ193I END-TO-END TRANSLATOR SERVER PROCESS IS NOW AVAILABLE

                 Improved job log retrieval performance
                 In IBM Tivoli Workload Scheduler 8.1, the thread structure of the translator
                 process meant that only ordinary incoming events were notified to the controller
                 immediately; job log events were detected by the controller only when another
                 event arrived or after a 30-second timeout.

                 In IBM Tivoli Workload Scheduler 8.2, a new input-writer thread has been
                 implemented that manages the writing of events to the input queue and takes
                 input from both the input translator and the job log retriever. This enables the job
                 log retriever to test whether there is room on the input queue and, if there is not,
                 to loop until enough space is available. Meanwhile, the input translator can
                 continue to write its smaller events to the queue.


1.6.2 General enhancements
                 In this section, we cover enhancements in the general category.

                 Centralized Script Library Management
                 In order to ease the migration path from OPC tracker agents to IBM Tivoli
                 Workload Scheduler Distributed Agents, a new function has been introduced in
                 Tivoli Workload Scheduler 8.2 called Centralized Script Library Management (or
                 Centralized Scripting). It is now possible to use the Tivoli Workload Scheduler for
                 z/OS engine as the centralized repository for scripts of distributed jobs.


A centralized script is stored in the JOBLIB, and it provides features that were
available on OPC tracker agents, such as:
   JCL editing
   Variable substitution and job setup
   Automatic recovery
   Support for the job-submit exit (EQQUX001)

 Note: The centralized script feature is not supported for fault-tolerant jobs
 running on an AS/400 fault-tolerant agent.

Rules for defining centralized scripts
To define a centralized script in the JOBLIB, the following rules must be
considered:
   The lines that start with //* OPC, //*%OPC, and //*>OPC are used for
   variable substitution and automatic recovery. They are removed before the
   script is downloaded to the distributed agent.
   Each line extends from column 1 to column 80.
   A backslash (\) in column 80 is the continuation character.
   Blanks at the end of a line are automatically removed.

These rules guarantee compatibility with the old tracker agent jobs; a minimal
sketch of such a JOBLIB member follows.
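
As a minimal sketch of a JOBLIB member that follows these rules, consider the
following centralized script. The member content, script path, the //*%OPC SCAN
directive, and the supplied variable &OYMD1. are illustrative assumptions; verify
the exact directives and variable names in IBM Tivoli Workload Scheduler for z/OS
Managing the Workload, SC32-1263.

   //*%OPC SCAN
   #!/bin/ksh
   # The //*%OPC line above is removed and &OYMD1. is substituted
   # before the script is downloaded to the distributed agent.
   /prod/tws/scripts/daily_extract.sh &OYMD1.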

 Note: The SCRIPTLIB follows the TSO rules, so the rules to define a
 centralized script in the JOBLIB differ from those to define the JOBSCR and
 JOBCMD of a non-centralized script.

For more details, refer to 4.5.2, “Definition of centralized scripts” on page 219.

A new data set, EQQTWSCS, has been introduced with this release to
facilitate centralized scripting. EQQTWSCS is a PDSE data set that is used to
store a script temporarily while it is downloaded from the JOBLIB data set to the
agent for submission.

User interface changes for the centralized script
Centralized Scripting required changes to several Tivoli Workload Scheduler for
z/OS interfaces such as ISPF, Job Scheduling Console, and a number of batch
interfaces. In this section, we cover the changes to the user interfaces ISPF and
Job Scheduling Console.

In ISPF, a new job option has been added to specify whether an operation that
runs on a fault-tolerant workstation has a centralized script. It can be set to Y or N:
   Y if the job has its script stored centrally in the JOBLIB.
   N if the script is stored locally and the job has its job definition in the
   SCRPTLIB.

                 In the database, the value of this new job option can be modified when you add
                 or modify an application or operation. It can be set for every operation, without
                 workstation checking. When a new operation is created, the default value for
                 this option is N. For non-FTW (fault-tolerant workstation) operations, the value
                 of the option is automatically changed to Y during daily planning or when you
                 exit the Modify an occurrence or Create an occurrence dialog.

                 The new Centralized Script option was added for operations in the Application
                 Description database and is always editable (Figure 1-3).




                 Figure 1-3 CENTRALIZED SCRIPT option in the AD dialog

                 The Centralized Script option also has been added for operations in the current
                 plan. It is editable only when adding a new operation. It can be browsed when
                 modifying an operation (Figure 1-4 on page 13).




Figure 1-4 CENTRALIZED SCRIPT option in the CP dialog

Similarly, a Centralized Script option has been added to the Job Scheduling
Console dialog for creating an FTW task, as shown in Figure 1-5.




Figure 1-5 Centralized Script option in the JSC dialog




Considerations when using centralized scripts
                 Using centralized scripts can ease the migration path from OPC tracker agents to
                 FTAs. It is also easier to maintain centralized scripts because they are kept in a
                 central location, but these benefits come with some limitations. When deciding
                 whether to store a script locally or centrally, take into consideration that:
                     The script must be downloaded every time a job runs. There is no caching
                     mechanism on the FTA. The script is discarded as soon as the job completes.
                     A rerun of a centralized job causes the script to be downloaded again.
                     Fault tolerance is reduced, because the centralized dependency can be
                     released only by the controller.

                 Recovery for non-centralized jobs
                 In Tivoli Workload Scheduler 8.2, a new, simple syntax has been added to the job
                 definition to specify recovery options and actions. Recovery is performed
                 automatically on the FTA in case of an abend. With this feature, it is now possible
                 to use recovery for jobs running in an end-to-end network in the same way as it is
                 implemented in IBM Tivoli Workload Scheduler distributed.

                 Defining recovery for non-centralized jobs
                 To activate the recovery for a non-centralized job, you have to specify the
                 RECOVERY statement in the job member in the scriptlib.

                 It is possible to specify one or both of the following recovery actions:
                     A recovery job (JOBCMD or JOBSCR keywords)
                     A recovery prompt (MESSAGE keyword)

                 The recovery actions must be followed by one of the recovery options (the
                 OPTION keyword): stop, continue, or rerun. The default is stop, with no recovery
                 job and no recovery prompt.
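
                 To make this concrete, the following sketch shows a SCRPTLIB job definition
                 that combines a recovery job and a recovery prompt with the rerun option. The
                 script paths and the recovery workstation name are hypothetical; the keywords
                 (JOBREC, JOBSCR, RECOVERY, OPTION, JOBWS, MESSAGE) are those
                 described in this section.

                    JOBREC
                       JOBSCR('/prod/scripts/load_db.sh')
                    RECOVERY
                       OPTION(RERUN)
                       JOBSCR('/prod/scripts/cleanup_db.sh')
                       JOBWS(FTA1)
                       MESSAGE('Cleanup finished - OK to rerun the load?')

                 With this definition, when the main job abends the prompt is issued; a yes reply
                 launches the cleanup job and, if it completes successfully, the main job is rerun
                 (see Table 1-1).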

                 Figure 1-6 on page 15 shows the syntax of the RECOVERY statement.




Figure 1-6 Syntax of the RECOVERY statement

                 The keywords JOBUSR, JOBWS, INTRACTV, and RCCONDSUC can be used
                 only if you have defined a recovery job using the JOBSCR or JOBCMD keyword.

                 You cannot use the recovery prompt if you specify the recovery STOP option
                 without using a recovery job. Specifying OPTION(RERUN) with no recovery
                 prompt could cause a loop; to prevent this, after a failed rerun of the job, a
                 recovery prompt message is issued automatically.

                   Note: The RECOVERY statement is ignored if it is used with a job that runs a
                   centralized script.

                 For more details, refer to 4.5.3, “Definition of non-centralized scripts” on
                 page 221.

                 Recovery actions available
                 Table 1-1 describes the recovery actions that can be taken against a job that
                 ended in error (as opposed to failed). Note that JobP is the principal job and JobR
                 is the recovery job.
Table 1-1 The recovery actions taken against a job ended in error

No recovery prompt / no recovery job
   Stop:      JobP remains in error.
   Continue:  JobP is completed.
   Rerun:     Rerun JobP.

A recovery prompt / no recovery job
   Stop:      Issue the prompt. JobP remains in error.
   Continue:  Issue the prompt. If the reply is "yes", JobP is completed. If the
              reply is "no", JobP remains in error.
   Rerun:     Issue the prompt. If the reply is "no", JobP remains in error. If the
              reply is "yes", rerun JobP.

No recovery prompt / a recovery job
   Stop:      Launch JobR. If it is successful, JobP is completed; otherwise JobP
              remains in error.
   Continue:  Launch JobR. JobP is completed.
   Rerun:     Launch JobR. If it is successful, rerun JobP; otherwise JobP remains
              in error.

A recovery prompt / a recovery job
   Stop:      Issue the prompt. If the reply is "no", JobP remains in error. If the
              reply is "yes", launch JobR; if JobR is successful, JobP is completed,
              otherwise JobP remains in error.
   Continue:  Issue the prompt. If the reply is "no", JobP remains in error. If the
              reply is "yes", launch JobR and JobP is completed.
   Rerun:     Issue the prompt. If the reply is "no", JobP remains in error. If the
              reply is "yes", launch JobR; if JobR is successful, rerun JobP,
              otherwise JobP remains in error.

                 Job Instance Recovery Information panels
                 Figure 1-7 shows the Job Scheduling Console Job Instance Recovery
                 Information panel. You can browse the job log of the recovery job, and you can
                 reply to the recovery prompt. Note the mapping between the fields in the Job
                 Scheduling Console panel and the JOBREC parameters.




Figure 1-7 JSC and JOBREC parameters mapping



Also note that you can access the same information from the ISPF panels. From
the Operation list in MCP (option 5.3), if the operation has abended and the
RECOVERY statement has been used, you can use the row command RI (Recovery
Information) to display the new panel EQQRINP, as shown in Figure 1-8.




Figure 1-8 EQQRINP ISPF panel


Variable substitution for non-centralized jobs
In Tivoli Workload Scheduler 8.2, a new, simple syntax has been added to the job
definition to specify variable substitution directives. This provides the capability
to use variable substitution for jobs running in an end-to-end network without
using the centralized script solution.

Tivoli Workload Scheduler for z/OS–supplied variables and user-defined
variables (defined using a table) are supported by this new function. Variables are
substituted when a job is added to the Symphony file (that is, when daily planning
creates the Symphony file or when the job is added to the plan using the MCP
dialog).

To activate the variable substitution, use the VARSUB statement. The syntax of
the VARSUB statement is given in Figure 1-9 on page 18. Note that it must be
the first one in the SCRPTLIB member containing the job definition. The
VARSUB statement enables you to specify variables when you set a statement
keyword in the job definition.




Figure 1-9 Syntax of the VARSUB statement

                 Use the TABLES keyword to identify the variable tables that must be searched
                 and the search order. In particular:
                     APPL indicates the application variable table specified in the VARIABLE
                     TABLE field on the MCP panel, at Occurrence level.
                     GLOBAL indicates the table defined in the GTABLE keyword of the
                     OPCOPTS controller and BATCHOPT batch options.

                 Any non-alphanumeric character, except blanks, can be used as a symbol to
                 indicate that the characters that follow represent a variable. You can define two
                 kinds of symbols, using the PREFIX and BACKPREF keywords in the VARSUB
                 statement; this allows you to define simple and compound variables.
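
                 As an illustration, the following sketch makes the application-level and global
                 variable tables available to a job and uses variables in the script path. The table
                 name ACCTTAB, the user variable &TWSHOME., and the supplied date variable
                 &OYMD1. are assumptions used only for illustration.

                    VARSUB
                       TABLES(APPL,ACCTTAB,GLOBAL)
                       PREFIX('&')
                       VARFAIL(YES)
                       TRUNCATE(NO)
                    JOBREC
                       JOBSCR('&TWSHOME./scripts/extract.sh &OYMD1.')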

                 For more details, refer to 4.5.3, “Definition of non-centralized scripts” on
                 page 221, and “Job Tailoring” in IBM Tivoli Workload Scheduler for z/OS
                 Managing the Workload, SC32-1263.

                 Return code mapping
                 In Tivoli Workload Scheduler 8.1, if a fault-tolerant job ends with a return code
                 greater than 0, it is considered abended.

                 It should be possible to define whether a job is successful or abended according
                 to a “success condition” defined at the job level; this would supply the NOERROR
                 functionality, which was supported only for host jobs.

                 In Tivoli Workload Scheduler for z/OS 8.2, a new keyword (RCCONDSUC) has
                 been added to the job definition to specify the success condition. The Tivoli
                 Workload Scheduler for z/OS 8.2 interfaces show the operation's return code.

                 Customize the JOBREC and RECOVERY statements in the SCRPTLIB to
                 specify a success condition for the job by adding the RCCONDSUC keyword. The
                 success condition expression can contain a combination of comparison and
                 Boolean expressions.




Comparison expression
A comparison expression specifies the job return codes. The syntax is:
   (RC operator operand)
where:
RC           The RC keyword.
operand      An integer between -2147483647 and 2147483647.
operator     A comparison operator. Table 1-2 lists the values it can have.

Table 1-2 Comparison operator values

 Example          Operator          Description

 RC < a           <                 Less than

 RC <= a          <=                Less than or equal to

 RC > a           >                 Greater than

 RC >= a          >=                Greater than or equal to

 RC = a           =                 Equal to

 RC <> a          <>                Not equal to

 Note: Unlike IBM Tivoli Workload Scheduler distributed, the != operator is not
 supported to specify a ‘not equal to’ condition.

The successful return code is specified by a logical combination of comparison
expressions. The syntax is:
   comparison_expression operator comparison_expression

For example, you can define a successful job as a job that ends with a return
code less than 3 or equal to 5 as follows:
   RCCONDSUC('(RC<3) OR (RC=5)')

 Note: If you do not specify RCCONDSUC, only a return code equal to zero
 corresponds to a successful condition.
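
Putting this together, a sketch of a SCRPTLIB job definition that treats return
codes 0, 4, and 8 as successful could look like the following (the script path is
hypothetical):

   JOBREC
      JOBSCR('/prod/scripts/load_warehouse.sh')
      RCCONDSUC('(RC=0) OR (RC=4) OR (RC=8)')

Any other return code causes the job to be treated as ended in error.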


Late job handling
In IBM Tivoli Workload Scheduler 8.2 distributed, a user can define a DEADLINE
time for a job or a job stream. If the job has not started or is still executing after
the deadline time has passed, Tivoli Workload Scheduler informs the user of the
missed deadline.




IBM Tivoli Workload Scheduler for z/OS 8.2 now supports this function. In
                 Version 8.2, the user can specify and modify a deadline time for a job or a job
                 stream. If the job is running on a fault-tolerant agent, the deadline time is also
                 stored in the Symphony file, and it is managed locally by the FTA.

                 In an end-to-end network, the deadline is always defined for operations and
                 occurrences. To improve performance, the batchman process on USS does not
                 check the deadline.


1.6.3 Security enhancements
                 This new version includes a number of security enhancements, which are
                 discussed in this section.

                 Firewall support in an end-to-end environment
                 For previous versions of Tivoli Workload Scheduler for z/OS, running the
                 commands to start or stop a workstation, or to get the standard list, required
                 opening a direct TCP/IP connection between the originator and the destination
                 nodes. In a firewall environment, this forced users to open holes in the firewall to
                 allow a direct communication path between the Tivoli Workload Scheduler for
                 z/OS master and each fault-tolerant agent in the network.

                 In this version, it is now possible to enable the firewall support of Tivoli Workload
                 Scheduler in an end-to-end environment. If a firewall exists between a
                 workstation and its domain manager, set the FIREWALL option to YES in that
                 workstation's CPUREC statement; this forces the start, stop, and get-job-output
                 commands to go through the domain hierarchy.

                 Example 1-2 shows a CPUREC definition that enables the firewall support.
                 Example 1-2 CPUREC definition with firewall support enabled
                 CPUREC     CPUNAME(TWAD)
                            CPUOS(WNT)
                            CPUNODE(jsgui)
                            CPUDOMAIN(maindom)
                            CPUTYPE(FTA)
                            FIREWALL(Y)


                 SSL support
                 It is now possible to enable the strong authentication and encryption (SSL)
                 support of IBM Tivoli Workload Scheduler in an end-to-end environment.

                 You can enable the Tivoli Workload Scheduler processes that run as USS (UNIX
                 System Services) processes in the Tivoli Workload Scheduler for z/OS address



space to establish SSL authentication between a Tivoli Workload Scheduler for
         z/OS master and the underlying IBM Tivoli Workload Scheduler domain
         managers.

         The authentication mechanism of IBM Tivoli Workload Scheduler is based on the
         OpenSSL toolkit, while IBM Tivoli Workload Scheduler for z/OS uses the System
         SSL services of z/OS.

         To enable SSL authentication for your end-to-end network, you must perform the
         following actions:
         1. Create as many private keys, certificates, and trusted certification authority
            (CA) chains as you plan to use in your network.
            Refer to the OS/390 V2R10.0 System SSL Programming Guide and
            Reference, SC23-3978, for further details about the SSL protocol.
          2. Customize the localopts file on the IBM Tivoli Workload Scheduler
             workstations. To find out how to enable SSL on the IBM Tivoli Workload
             Scheduler domain managers, refer to IBM Tivoli Workload Scheduler for z/OS
             Installation, SC32-1264.
          3. Configure IBM Tivoli Workload Scheduler for z/OS:
             – Customize the localopts file in the USS work directory.
             – Customize the TOPOLOGY statement for the OPCMASTER.
             – Customize the CPUREC statements for every workstation in the network
               (a hedged sketch follows this list).
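
          As a hedged sketch only, the workstation definition from Example 1-2 might be
          extended with SSL settings along the following lines. The SSLLEVEL and
          SSLPORT keywords and their values are assumptions about the Version 8.2
          syntax; verify them in IBM Tivoli Workload Scheduler for z/OS Customization
          and Tuning, SC32-1265, before using them.

             CPUREC     CPUNAME(TWAD)
                        CPUOS(WNT)
                        CPUNODE(jsgui)
                        CPUDOMAIN(maindom)
                        CPUTYPE(FTA)
                        SSLLEVEL(ON)            /* enable SSL for this FTA */
                        SSLPORT(31113)          /* SSL listening port      */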

          Refer to IBM Tivoli Workload Scheduler for z/OS Customization and Tuning,
          SC32-1265, for details about SSL support in Tivoli Workload Scheduler for z/OS.



1.7 The terminology used in this book
         The IBM Tivoli Workload Scheduler 8.2 suite comprises two somewhat different
         software programs, each with its own history and terminology. For this reason,
         there are sometimes two different and interchangeable names for the same
         thing. Other times, a term used in one context can have a different meaning in
         another context. To help clear up this confusion, we now introduce some of the
         terms and acronyms that will be used throughout the book. In order to make the
         terminology used in this book internally consistent, we adopted a system of
          terminology that may be a bit different from that used in the product
         documentation. So take a moment to read through this list, even if you are
         already familiar with the products.
          IBM Tivoli Workload Scheduler 8.2 suite
                                             The suite of programs that includes IBM Tivoli Workload
                                            Scheduler and IBM Tivoli Workload Scheduler for z/OS.
                                            These programs are used together to make end-to-end
                                            scheduling work. Sometimes called just IBM Tivoli
                                            Workload Scheduler.
                 IBM Tivoli Workload Scheduler
                                            This is the version of IBM Tivoli Workload Scheduler that
                                            runs on UNIX, OS/400, and Windows operating systems,
                                            as distinguished from IBM Tivoli Workload Scheduler for
                                            z/OS, a somewhat different program. Sometimes called
                                            IBM Tivoli Workload Scheduler Distributed. IBM Tivoli
                                            Workload Scheduler is based on the old Maestro
                                            program.
                 IBM Tivoli Workload Scheduler for z/OS
                                            This is the version of IBM Tivoli Workload Scheduler that
                                            runs on z/OS, as distinguished from IBM Tivoli Workload
                                            Scheduler (by itself, without the for z/OS specification).
                                            IBM Tivoli Workload Scheduler for z/OS is based on the
                                            old OPC program.
                 Master                     The top level of the IBM Tivoli Workload Scheduler or IBM
                                            Tivoli Workload Scheduler for z/OS scheduling network.
                                            Also called the master domain manager, because it is the
                                            domain manager of the MASTERDM (top-level) domain.
                 Domain manager             The agent responsible for handling dependency
                                            resolution for subordinate agents. Essentially an FTA with
                                            a few extra responsibilities.
                 Fault-tolerant agent An agent that keeps its own local copy of the plan file and
                                      can continue operation even if the connection to the
                                      parent domain manager is lost. Also called an FTA. In IBM
                                      Tivoli Workload Scheduler for z/OS, FTAs are referred to
                                      as fault tolerant workstations.
                 Scheduling engine          An IBM Tivoli Workload Scheduler engine or IBM Tivoli
                                            Workload Scheduler for z/OS engine.
                 IBM Tivoli Workload Scheduler engine
                                            The part of IBM Tivoli Workload Scheduler that does
                                            actual scheduling work, as distinguished from the other
                                            components that are related primarily to the user interface
                                            (for example, the IBM Tivoli Workload Scheduler
                                            connector). Essentially the part of IBM Tivoli Workload
                                            Scheduler that is descended from the old Maestro
                                            program.


IBM Tivoli Workload Scheduler for z/OS engine
                     The part of IBM Tivoli Workload Scheduler for z/OS that
                     does actual scheduling work, as distinguished from the
                     other components that are related primarily to the user
                     interface (for example, the IBM Tivoli Workload Scheduler
                     for z/OS connector). Essentially the controller plus the
                     server.
IBM Tivoli Workload Scheduler for z/OS controller
                     The part of the IBM Tivoli Workload Scheduler for z/OS
                     engine that is based on the old OPC program.
IBM Tivoli Workload Scheduler for z/OS server
                     The part of IBM Tivoli Workload Scheduler for z/OS that is
                     based on the UNIX IBM Tivoli Workload Scheduler code.
                     Runs in UNIX System Services (USS) on the mainframe.
JSC                  Job Scheduling Console. This is the common graphical
                     user interface (GUI) to both the IBM Tivoli Workload
                     Scheduler and IBM Tivoli Workload Scheduler for z/OS
                     scheduling engines.
Connector            A small program that provides an interface between the
                     common GUI (Job Scheduling Console) and one or more
                     scheduling engines. The connector translates to and from
                     the different “languages” used by the different scheduling
                     engines.
JSS                  Job Scheduling Services. Essentially a library that is used
                     by the connectors.
TMF                  Tivoli Management Framework. Also called just the
                     Framework.




Chapter 2.   End-to-end scheduling architecture
                  End-to-end scheduling involves running programs on multiple platforms. For this
                  reason, it is important to understand how the different components work
                  together. Taking the time to get acquainted with end-to-end scheduling
                  architecture will make it easier for you to install, use, and troubleshoot your
                  end-to-end scheduling system.

                  In this chapter, the following topics are discussed:
                      “IBM Tivoli Workload Scheduler for z/OS architecture” on page 27
                      “Tivoli Workload Scheduler architecture” on page 50
                      “End-to-end scheduling architecture” on page 59
                      “Job Scheduling Console and related components” on page 89

                  If you are unfamiliar with IBM Tivoli Workload Scheduler for z/OS, you can start
                  with the section about its architecture to get a better understanding of how it
                  works.

                  If you are already familiar with Tivoli Workload Scheduler for z/OS but would like
                  to learn more about IBM Tivoli Workload Scheduler (for other platforms such as
                  UNIX, Windows, or OS/400), you can skip to that section.




If you are already familiar with both IBM Tivoli Workload Scheduler and IBM
                 Tivoli Workload Scheduler for z/OS, skip ahead to the third section, in which we
                 describe how both programs work together when configured as an end-to-end
                 network.

                 The Job Scheduling Console, its components, and its architecture, are described
                 in the last topic. In this topic, we describe the different components that are used
                 to establish a Job Scheduling Console environment.




2.1 IBM Tivoli Workload Scheduler for z/OS architecture
         IBM Tivoli Workload Scheduler for z/OS expands the scope for automating your
         data processing operations. It plans and automatically schedules the production
         workload. From a single point of control, it drives and controls the workload
         processing at both local and remote sites. By using IBM Tivoli Workload
         Scheduler for z/OS to increase automation, you use your data processing
         resources more efficiently, have more control over your data processing assets,
         and manage your production workload processing better.

         IBM Tivoli Workload Scheduler for z/OS is composed of three major features:
            The IBM Tivoli Workload Scheduler for z/OS agent feature
            The agent is the base product in IBM Tivoli Workload Scheduler for z/OS. The
             agent is also called a tracker. It must run on every system in your z/OS
             complex on which work controlled by IBM Tivoli Workload Scheduler for z/OS
             runs. The agent records details of job starts and passes that information
            to the engine, which updates the plan with statuses.
            The IBM Tivoli Workload Scheduler for z/OS engine feature
            One z/OS operating system in your complex is designated the controlling
            system and it runs the engine. The engine is also called the controller. Only
            one engine feature is required, even when you want to establish standby
            engines on other z/OS systems in a sysplex.
            The engine manages the databases and the plans and causes the work to be
            submitted at the appropriate time and at the appropriate system in your z/OS
            sysplex or on another system in a connected z/OS sysplex or z/OS system.
            The IBM Tivoli Workload Scheduler for z/OS end-to-end feature
            This feature makes it possible for the IBM Tivoli Workload Scheduler for z/OS
            engine to manage a production workload in a Tivoli Workload Scheduler
            distributed environment. You can schedule, control, and monitor jobs in Tivoli
            Workload Scheduler from the Tivoli Workload Scheduler for z/OS engine with
            this feature.
            The end-to-end feature is covered in 2.3, “End-to-end scheduling
            architecture” on page 59.
            The workload on other operating environments can also be controlled with the
            open interfaces that are provided with Tivoli Workload Scheduler for z/OS.
            Sample programs using TCP/IP or a Network Job Entry/Remote Spooling
            Communication Subsystem (NJE/RSCS) combination show you how you can
            control the workload on environments that at present have no scheduling
            feature.




In addition to these major parts, the IBM Tivoli Workload Scheduler for z/OS
                 product also contains the IBM Tivoli Workload Scheduler for z/OS connector and
                 the Job Scheduling Console (JSC).
                     IBM Tivoli Workload Scheduler for z/OS connector
                     Maps the Job Scheduling Console commands to the IBM Tivoli Workload
                     Scheduler for z/OS engine. The Tivoli Workload Scheduler for z/OS connector
                     requires that the Tivoli Management Framework be configured for a Tivoli
                     server or Tivoli managed node.
                     Job Scheduling Console
                     A Java-based graphical user interface (GUI) for the IBM Tivoli Workload
                     Scheduler suite.
                     The Job Scheduling Console runs on any machine from which you want to
                     manage Tivoli Workload Scheduler for z/OS engine plan and database
                     objects. It provides, through the IBM Tivoli Workload Scheduler for z/OS
                     connector, functionality similar to the IBM Tivoli Workload Scheduler for z/OS
                     legacy ISPF interface. You can use the Job Scheduling Console from any
                     machine as long as it has a TCP/IP link with the machine running the IBM
                     Tivoli Workload Scheduler for z/OS connector.
                     The same Job Scheduling Console can be used for Tivoli Workload
                     Scheduler and Tivoli Workload Scheduler for z/OS.

                 In the next topics, we provide an overview of IBM Tivoli Workload Scheduler for
                 z/OS configuration, the architecture, and the terminology used in Tivoli Workload
                 Scheduler for z/OS.


2.1.1 Tivoli Workload Scheduler for z/OS configuration
                 IBM Tivoli Workload Scheduler for z/OS supports many configuration options
                 using a variety of communication methods:
                     The controlling system (the controller or engine)
                     Controlled z/OS systems
                     Remote panels and program interface applications
                     Job Scheduling Console
                     Scheduling jobs that are in a distributed environment using Tivoli Workload
                     Scheduler (described in 2.3, “End-to-end scheduling architecture” on
                     page 59)




The controlling system
The controlling system requires both the agent and the engine. One controlling
system can manage the production workload across all of your operating
environments.

The engine is the focal point of control and information. It contains the controlling
functions, the dialogs, the databases, the plans, and the scheduler’s own batch
programs for housekeeping and so forth. Only one engine is required to control
the entire installation, including local and remote systems.

Because IBM Tivoli Workload Scheduler for z/OS provides a single point of
control for your production workload, it is important to make this system
redundant. This minimizes the risk of having any outages in your production
workload in case the engine or the system with the engine fails. To make the
engine redundant, one can start backup engines (hot standby engines) on other
systems in the same sysplex as the active engine. If the active engine or the
controlling system fails, Tivoli Workload Scheduler for z/OS can automatically
transfer the controlling functions to a backup system within a Parallel Sysplex.
Through Cross Coupling Facility (XCF), IBM Tivoli Workload Scheduler for z/OS
can automatically maintain production workload processing during system
failures. The standby engine can be started on several z/OS systems in the
sysplex.

Figure 2-1 on page 30 shows an active engine with two standby engines running
in one sysplex. When an engine is started on a system in the sysplex, it will
                  check whether there is already an active engine in the sysplex. If there are no
                  active engines, it becomes the active engine. If there is an active engine, it becomes a
standby engine. The engine in Figure 2-1 on page 30 has connections to eight
agents: three in the sysplex, two remote, and three in another sysplex. The
agents on the remote systems and in the other sysplexes are connected to the
active engine via ACF/VTAM® connections.




                  [Figure not reproduced: the active engine and an agent on one system in a z/OS
                  sysplex, standby engines and agents on two other systems in the same sysplex,
                  two stand-alone remote agents, and three remote agents in a second z/OS
                  sysplex, all connected to the active engine through VTAM.]

                  Figure 2-1 Two sysplex environments and stand-alone systems


                 Controlled z/OS systems
                 An agent is required for every controlled z/OS system in a configuration. This
                 includes, for example, locally controlled systems within shared DASD or sysplex
                 configurations.

                  The agent runs as a z/OS subsystem and interfaces with the operating system
                  through JES2 or JES3 (the Job Entry Subsystem) and SMF (System
                  Management Facilities), using the subsystem interface and the operating system
                 exits. The agent monitors and logs the status of work, and passes the status
                 information to the engine via shared DASD, XCF, or ACF/VTAM.

                 You can exploit z/OS and the cross-system coupling facility (XCF) to connect
                 your local z/OS systems. Rather than being passed to the controlling system via
                 shared DASD, work status information is passed directly via XCF connections.
                  XCF enables you to exploit all of the production workload restart facilities and
                  the hot standby function of Tivoli Workload Scheduler for z/OS.

                 Remote systems
                 The agent on a remote z/OS system passes status information about the
                 production work in progress to the engine on the controlling system. All
                 communication between Tivoli Workload Scheduler for z/OS subsystems on the
                 controlling and remote systems is done via ACF/VTAM.


Tivoli Workload Scheduler for z/OS enables you to link remote systems using
ACF/VTAM networks. Remote systems are frequently used locally (on premises)
to reduce the complexity of the data processing installation.

Remote panels and program interface applications
ISPF panels and program interface (PIF) applications can run in a different z/OS
system than the one where the active engine is running. Dialogs and PIF
applications send requests to and receive data from a Tivoli Workload Scheduler
for z/OS server that is running on the same z/OS system as the target engine, via
advanced program-to-program communications (APPC). The APPC server
communicates with the active engine to perform the requested actions.

Using an APPC server for ISPF panels and PIF gives the user the freedom to run
ISPF panels and PIF on any system in a z/OS enterprise, as long as this system
has advanced program-to-program communication with the system where the
active engine is started. This also means that you do not have to make sure that
your PIF jobs always run on the z/OS system where the active engine is started.
Furthermore, using the APPC server makes it seamless for panel users and PIF
programs if the engine is moved to its backup engine.

The APPC server is a separate address space, started and stopped either
automatically by the engine, or by the user via the z/OS start command. There
can be more than one server for an engine. If the dialogs or the PIF applications
run on the same z/OS system as the target engine, the server may not be
involved. As shown in Figure 2-2 on page 32, it is possible to run the IBM Tivoli
Workload Scheduler for z/OS dialogs and PIF applications from any system as
long as the system has an ACF/VTAM connection to the APPC server.




                  [Figure not reproduced: the active engine and its APPC server on a system in a
                  z/OS sysplex, with ISPF panels and PIF programs running both on systems in
                  the sysplex and on remote systems, connected to the APPC server through VTAM.]

                  Figure 2-2 APPC server with remote panels and PIF access to ITWS for z/OS


                  Note: Job Scheduling Console is the GUI to both IBM Tivoli Workload
                  Scheduler for z/OS and IBM Tivoli Workload Scheduler. JSC is discussed in
                  2.4, “Job Scheduling Console and related components” on page 89.


2.1.2 Tivoli Workload Scheduler for z/OS database objects
                 Scheduling with IBM Tivoli Workload Scheduler for z/OS includes the capability
                 to do the following:
                      Schedule jobs across multiple systems, both locally and remotely.
                     Group jobs into job streams according to, for example, function or application,
                     and define advanced run cycles based on customized calendars for the job
                     streams.
                     Set workload priorities and specify times for the submission of particular work.
                     Base submission of work on availability of resources.
                     Tailor jobs automatically based on dates, date calculations, and so forth.
                     Ensure correct processing order by identifying dependencies such as
                     successful completion of previous jobs, availability of resources, and time of
                     day.


Define automatic recovery and restart for jobs.
   Forward incomplete jobs to the next production day.

This is accomplished by defining scheduling objects in the Tivoli Workload
Scheduler for z/OS databases that are managed by the active engine and shared
by the standby engines. Scheduling objects are combined in these databases so
that they represent the workload that you want to have handled by Tivoli
Workload Scheduler for z/OS.

Tivoli Workload Scheduler for z/OS databases contain information about the
work that is to be run, when it should be run, and the resources that are needed
and available. This information is used to calculate a forward forecast called the
long-term plan.

Scheduling objects are elements that are used to define your Tivoli Workload
Scheduler for z/OS workload. Scheduling objects include job streams (jobs and
dependencies as part of job streams), workstations, calendars, periods, operator
instructions, resources, and JCL variables.

All of these scheduling objects can be created, modified, or deleted by using the
legacy IBM Tivoli Workload Scheduler for z/OS ISPF panels. Job streams,
workstations, and resources can be managed from the Job Scheduling Console
as well.
Job streams       A job stream (also known as an application in the legacy OPC
                  ISPF interface) is a description of a unit of production work. It
                  includes a list of jobs (related tasks) that are associated with
                  that unit of work. For example, a payroll job stream might
                  include a manual task in which an operator prepares a job;
                  several computer-processing tasks in which programs are run
                  to read a database, update employee records, and write payroll
                  information to an output file; and a print task that prints
                  paychecks.
                  IBM Tivoli Workload Scheduler for z/OS schedules work based
                  on the information that you provide in your job stream
                  description. A job stream can include the following:
                    A list of the jobs (related tasks) that are associated with that
                    unit of work, such as:
                    – Data entry
                    – Job preparation
                    – Job submission or started-task initiation
                    – Communication with the NetView® program
                    – File transfer to other operating environments


– Printing of output
                                         – Post-processing activities, such as quality control or
                                           dispatch
                                         – Other tasks related to the unit of work that you want to
                                           schedule, control, and track
                                        A description of dependencies between jobs within a job
                                        stream and between jobs in other job streams
                                        Information about resource requirements, such as exclusive
                                        use of a data set
                                        Special operator instructions that are associated with a job
                                        How, when, and where each job should be processed
                                        Run policies for that unit of work; that is, when it should be
                                        scheduled or, alternatively, the name of a group definition
                                        that records the run policy
                 Workstations         When scheduling and processing work, Tivoli Workload
                                      Scheduler for z/OS considers the processing requirements of
                                      each job. Some typical processing considerations are:
                                        What human or machine resources are required for
                                        processing the work (for example, operators, processors, or
                                        printers)?
                                        When are these resources available?
                                        How will these jobs be tracked?
                                        Can this work be processed somewhere else if the resources
                                        become unavailable?
                                      You can plan for maintenance windows in your hardware and
                                      software environments. Tivoli Workload Scheduler for z/OS
                                      enables you to perform a controlled and incident-free shutdown
                                      of the environment, preventing last-minute cancellation of
                                      active tasks. You can choose to reroute the workload
                                      automatically during any outage, planned or unplanned.
                                      Tivoli Workload Scheduler for z/OS tracks jobs as they are
                                      processed at workstations and dynamically updates the plan
                                      with real-time information on the status of jobs. You can view or
                                      modify this status information online using the workstation
                                      ready lists in the dialog.
                 Dependencies         In general, every data-processing-related activity must occur in
                                      a specific order. Activities performed out of order will, at the
                                      very least, create invalid output; in the worst case your



corporate data will be corrupted. In any case, the result is
                  costly reruns, missed deadlines, and unsatisfied customers.
                  You can define dependencies for jobs when a specific
                  processing order is required. When IBM Tivoli Workload
                  Scheduler for z/OS manages the dependent relationships, the
                  jobs are started in the correct order every time they are
                  scheduled. A dependency is called internal when it is between
                  two jobs in the same job stream, and external when it is
                  between two jobs in different job streams.
                  You can work with job dependencies graphically from the Job
                  Scheduling Console (Figure 2-3).




Figure 2-3 Job Scheduling Console display of dependencies between jobs

Calendars         Tivoli Workload Scheduler for z/OS uses information about
                   when your departments work and when they are free, so
                   job streams are not scheduled to run on days when processing
                   resources are not available (such as Sundays and holidays).
                  This information is stored in a calendar. Tivoli Workload
                  Scheduler for z/OS supports multiple calendars for enterprises
                  where different departments have different work days and free




days (different groups within a business operate according to
                                      different calendars).
                                      The multiple calendar function is critical if your enterprise has
                                      installations in more than one geographical location (for
                                      example, with different local or national holidays).
                 Resources            Tivoli Workload Scheduler for z/OS enables you to serialize
                                      work based on the status of any data processing resource. A
                                      typical example is a job that uses a data set as input but must
                                      not start until the data set is successfully created and loaded
                                      with valid data. You can use resource serialization support to
                                      send availability information about a data processing resource
                                      to the workload in Tivoli Workload Scheduler for z/OS. To
                                      accomplish this, Tivoli Workload Scheduler for z/OS uses
                                      resources (also called special resources).
                                      Resources are typically defined to represent physical or logical
                                      objects used by jobs. A resource can be used to serialize
                                      access to a data set or to limit the number of file transfers on a
                                      particular network link. The resource does not have to
                                      represent a physical object in your configuration, although it
                                      often does.
                                      Tivoli Workload Scheduler for z/OS keeps a record of the state
                                      of each resource and its current allocation status. You can
                                      choose to hold resources in case a job allocating the resources
                                      ends abnormally. You can also use the Tivoli Workload
                                      Scheduler for z/OS interface with the Resource Object Data
                                      Manager (RODM) to schedule jobs according to real resource
                                      availability. You can subscribe to RODM updates in both local
                                      and remote domains.
                                      Tivoli Workload Scheduler for z/OS enables you to subscribe to
                                      data set activity on z/OS systems. Its dataset triggering
                                      function automatically updates special resource availability
                                      when a data set is closed. You can use this notification to
                                      coordinate planned activities or to add unplanned work to the
                                      schedule.
                 Periods              Tivoli Workload Scheduler for z/OS uses business processing
                                      cycles, or periods, to calculate when your job streams should
                                      be run; for example, weekly or every 10th working day. Periods
                                      are based on the business cycles of your customers.
                                      Tivoli Workload Scheduler for z/OS supports a range of
                                      periods for processing the different job streams in your
                                      production workload. It has several predefined periods that can
                                      be used when defining run cycles for your job streams, such as


week, month, year, and all of the Julian months (January
                             through December).
                             When you define a job stream, you specify when it should be
                             planned using a run cycle, which can be:
                               A rule with a format such as:
                                  ONLY the SECOND TUESDAY of every MONTH
                                  EVERY FRIDAY in the user-defined period SEMESTER1
                                In these examples, the words in capitals are selected from lists
                               of ordinal numbers, names of days, and common calendar
                               intervals or period names, respectively.
                               A combination of period and offset. For example, an offset of
                               10 in a monthly period specifies the tenth day of each
                               month.
           Operator instr.   You can specify an operator instruction to be associated with a
                             job in a job stream. This could be, for example, special running
                             instructions for a job or detailed restart information in case a
                             job abends and needs to be restarted.
           JCL variables     JCL variables are used to do automatic job tailoring in Tivoli
                             Workload Scheduler for z/OS. There are several predefined
                             JCL variables, such as current date, current time, planning
                             date, day number of week, and so forth. Besides these
                                       predefined variables, you can define your own specific or unique
                                       variables, so that your locally defined variables can be used for
                                       automatic job tailoring as well.
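
                              The following fragment is a minimal sketch of how such variables
                              can appear in a tailored job. The job, program, and data set names
                              are invented for this example, and the directive and variable names
                              (//*%OPC SCAN, &OYMD1) are shown only as typical forms; the complete
                              list of supplied variables and their formats is in the product
                              documentation.

                              //PAYDAILY JOB (ACCT),'DAILY PAYROLL',CLASS=A,MSGCLASS=X
                              //*%OPC SCAN
                              //* After the SCAN directive, Tivoli Workload Scheduler for z/OS
                              //* substitutes its JCL variables at submission time; here a date
                              //* variable in YYMMDD form is used to build a data set name.
                              //COPY     EXEC PGM=IEBGENER
                              //SYSPRINT DD SYSOUT=*
                              //SYSIN    DD DUMMY
                              //SYSUT1   DD DSN=PROD.PAYROLL.INPUT.D&OYMD1.,DISP=SHR
                              //SYSUT2   DD SYSOUT=*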


2.1.3 Tivoli Workload Scheduler for z/OS plans
           IBM Tivoli Workload Scheduler for z/OS plans your production workload
           schedule. It produces both a high-level (long-term) plan and a detailed (current)
           plan. These plans drive the production workload and can show the status of the
           production workload on your system at any specified time. You can produce trial
           plans to forecast future workloads (for example, to simulate the effects of
           changes to your production workload, calendar, and installation).

           Tivoli Workload Scheduler for z/OS builds the plans from your description of the
           production workload (that is, the objects you have defined in the Tivoli Workload
           Scheduler for z/OS databases).

           The plan process
           First, the long-term plan is created, which shows the job streams that should be
           run each day in a period, usually for one or two months. Then a more detailed
current plan is created. The current plan is used by Tivoli Workload Scheduler for
                 z/OS to submit and control jobs and job streams.

                 Long-term planning
                 The long-term plan is a high-level schedule of your anticipated production
                 workload. It lists, by day, the instances of job streams to be run during the period
                 of the plan. Each instance of a job stream is called an occurrence. The long-term
                 plan shows when occurrences are to run, as well as the dependencies that exist
                 between the job streams. You can view these dependencies graphically on your
                 terminal as a network to check that work has been defined correctly. The plan
                 can assist you in forecasting and planning for heavy processing days. The
                 long-term-planning function can also produce histograms showing planned
                 resource use for individual workstations during the plan period.

                 You can use the long-term plan as the basis for documenting your service level
                 agreements. It lets you relate service level agreements directly to your production
                 workload schedules so that your customers can see when and how their work is
                 to be processed.

                 The long-term plan provides a window to the future. How far into the future is up
                 to you, from one day to four years. Normally, the long-term plan goes two to three
                 months into the future. You can also produce long-term plan simulation reports
                 for any future date. IBM Tivoli Workload Scheduler for z/OS can automatically
                 extend the long-term plan at regular intervals. You can print the long-term plan as
                 a report, or you can view, alter, and extend it online using the legacy ISPF
                 dialogs.

                 The long-term plan extension is performed by a Tivoli Workload Scheduler for
                 z/OS program. This program is normally run as part of the daily Tivoli Workload
                 Scheduler for z/OS housekeeping job stream. By running this program on
                 workdays and letting the program extend the long-term plan by one working day,
                 you assure that the long-term plan is always up-to-date (Figure 2-4 on page 39).

Figure 2-4 The long-term plan extension process

This way the long-term plan always reflects changes that are made to job
streams, run cycles, and calendars, because these definitions are reread by the
program that extends the long-term plan. The long-term plan extension program
reads job streams (run cycles), calendars, and periods and creates the high-level
long-term plan based on these objects.

Current plan
The current plan, or simply the plan, is the heart of Tivoli Workload Scheduler for
z/OS processing: it drives the production workload automatically and provides a
way to check its status. The current plan is produced by batch jobs that extract
from the long-term plan the occurrences that fall within a specified period of
time, together with the corresponding job details. In effect, the current plan
selects a window from the long-term plan and makes the jobs ready to be run. The
jobs are actually started only when the defined restrictions are satisfied
(dependencies, resource availability, and time restrictions for time-dependent
jobs).

Job streams and related objects are copied from the Tivoli Workload Scheduler
for z/OS databases to the current plan occurrences. Because the objects are
copied to the current plan data set, any changes that are made to them in the
plan will not be reflected in the Tivoli Workload Scheduler for z/OS databases.

The current plan is a rolling plan that can cover several days. The extension of
the current plan is performed by a Tivoli Workload Scheduler for z/OS program
that normally is run on workdays as part of the daily workday-scheduled
housekeeping job stream (Figure 2-5 on page 40).

                  Figure 2-5 The current plan extension process

                  Extending the current plan by one workday means that it can cover more than
                  one calendar day. If, for example, Saturday and Sunday are considered as
                  Fridays (in the calendar used by the run cycle for the housekeeping job stream),
                  then when the current plan extension program is run on Friday afternoon, the
                  plan extends to Monday afternoon. A common method is to cover 1–2 days, with
                  regular extensions every shift.

                 Production workload processing activities are listed by minute in the plan. You
                 can either print the current plan as a report, or view, alter, and extend it online, by
                 using the legacy ISPF dialogs.

                   Note: Changes that are made to a job stream run cycle, such as changing the
                   job stream from running on Mondays to running on Tuesdays, are not reflected
                   immediately in the long-term or current plan. To have such changes reflected
                   in the long-term plan and current plan, you must first run a Modify all or
                   Extend long-term plan and then extend or replan the current plan. Therefore,
                   it is good practice to run the Extend long-term plan by one working day
                   (shown in Figure 2-4 on page 39) before the Extend of the current plan as
                   part of normal Tivoli Workload Scheduler for z/OS housekeeping.


                 Running job streams and jobs in the plan
                 Tivoli Workload Scheduler for z/OS automatically:
                     Starts and stops started tasks
Edits z/OS job JCL statements before submission
   Submits jobs in the specified sequence to the target operating
   environment—every time
   Tracks each scheduled job in the plan
   Determines the success or failure of the jobs
   Displays status information and instructions to guide workstation operators
   Provides automatic recovery of z/OS jobs when they end in error
   Generates processing dates for your job stream run cycles using rules such
   as:
   – Every second Tuesday of the month
   – Only the last Saturday in June, July, and August
   – Every third workday in the user-defined PAYROLL period
   Starts jobs with regard to real resource availability
   Performs data set cleanup in error and rerun situations for the z/OS workload
   Tailors the JCL for step restarts of z/OS jobs and started tasks
   Dynamically schedules additional processing in response to activities that
   cannot be planned
   Provides automatic notification when an updated data set is closed, which
   can be used to trigger subsequent processing
   Generates alerts when abnormal situations are detected in the workload

Automatic workload submission
Tivoli Workload Scheduler for z/OS automatically drives work through the
system, taking into account work that requires manual or program-recorded
completion. (Program-recorded completion refers to situations where the status
of a scheduler-controlled job is set to Complete by a user-written program.) It
also promotes the optimum use of resources, improves system availability, and
automates complex and repetitive operator tasks. Tivoli Workload Scheduler for
z/OS automatically controls the submission of work according to:
   Dependencies between jobs
   Workload priorities
   Specified times for the submission of particular work
   Availability of resources

By saving a copy of the JCL for each separate run, or occurrence, of a particular
job in its plans, Tivoli Workload Scheduler for z/OS prevents the unintentional
reuse of temporary JCL changes, such as overrides.

Job tailoring
                 Tivoli Workload Scheduler for z/OS provides automatic job-tailoring functions,
                 which enable jobs to be automatically edited. This can reduce your dependency
                 on time-consuming and error-prone manual editing of jobs. Tivoli Workload
                 Scheduler for z/OS job tailoring provides:
                     Automatic variable substitution
                     Dynamic inclusion and exclusion of inline job statements
                     Dynamic inclusion of job statements from other libraries or from an exit

                 For jobs to be submitted on a z/OS system, these job statements will be z/OS
                 JCL.

                 Variables can be substituted in specific columns, and you can define verification
                 criteria to ensure that invalid strings are not substituted. Special directives
                 supporting the variety of date formats used by job stream programs enable you
                  to dynamically define the required format and change it multiple times for the
                 same job. Arithmetic expressions can be defined to let you calculate values such
                 as the current date plus four work days.
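
                  As a sketch of what dynamic inclusion can look like, the fragment below
                  brackets a step between a pair of directives so that it is included in the
                  submitted JCL only when a day-of-week condition is met. The step, program,
                  and data set names are invented, and the directive operands are shown in a
                  typical form only; the exact syntax of the comparison expression should be
                  taken from the job tailoring documentation for your level of the product.

                  //*%OPC SCAN
                  //* Include the weekly step only when the day-of-week variable
                  //* indicates Friday (assuming 1 = Monday); otherwise the
                  //* bracketed lines are removed from the JCL before submission.
                  //*%OPC BEGIN ACTION=INCLUDE,COMP=(&ODAY..EQ.5)
                  //WEEKLY   EXEC PGM=IEBGENER
                  //SYSPRINT DD SYSOUT=*
                  //SYSIN    DD DUMMY
                  //SYSUT1   DD DSN=PROD.PAYROLL.MASTER,DISP=SHR
                  //SYSUT2   DD DSN=PROD.PAYROLL.WEEKLY.COPY,
                  //            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
                  //            SPACE=(CYL,(5,1)),DCB=*.SYSUT1
                  //*%OPC END ACTION=INCLUDE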

                 Manual control and intervention
                 Tivoli Workload Scheduler for z/OS enables you to check the status of work and
                 intervene manually when priorities change or when you need to run unplanned
                 work. You can query the status of the production workload and then modify the
                 schedule if needed.

                 Status inquiries
                 With the legacy ISPF dialogs or with the Job Scheduling Console, you can make
                 queries online and receive timely information about the status of the production
                 workload.

                 Time information that is displayed by the dialogs can be in the local time of the
                 dialog user. Using the dialogs, you can request detailed or summary information
                 about individual job streams, jobs, and workstations, as well as summary
                 information concerning workload production as a whole. You can also display
                 dependencies graphically as a network at both job stream and job level.

                 Status inquiries:
                     Provide you with overall status information that you can use when considering
                     a change in workstation capacity or when arranging an extra shift or overtime
                     work.
                     Help you supervise the work flow through the installation; for instance, by
                     displaying the status of work at each workstation.
Help you decide whether intervention is required to speed the processing of
   specific job streams. You can find out which job streams are the most critical.
   You can also check the status of any job stream, as well as the plans and
   actual times for each job.
   Enable you to check information before making modifications to the plan. For
   example, you can check the status of a job stream and its dependencies
   before deleting it or changing its input arrival time or deadline. See “Modifying
   the current plan” on page 43 for more information.
   Provide you with information about the status of processing at a particular
   workstation. Perhaps work that should have arrived at the workstation has not
   arrived. Status inquiries can help you locate the work and find out what has
   happened to it.

Modifying the current plan
Tivoli Workload Scheduler for z/OS makes status updates to the plan
automatically using its tracking functions. However, you can change the plan
manually to reflect unplanned changes to the workload or to the operations
environment, which often occur during a shift. For example, you may need to
change the priority of a job stream, add unplanned work, or reroute work from
one workstation to another. Or you may need to correct operational errors
manually. Modifying the current plan may be the best way to handle these
situations.

You can modify the current plan online. For example, you can:
   Include unexpected jobs or last-minute changes to the plan. Tivoli Workload
   Scheduler for z/OS then automatically creates the dependencies for this work.
   Manually modify the status of jobs.
   Delete occurrences of job streams.
   Graphically display job dependencies before you modify them.
   Modify the data in job streams, including the JCL.
   Respond to error situations by:
   – Rerouting jobs
   – Rerunning jobs or occurrences
   – Completing jobs or occurrences
   – Changing jobs or occurrences
   Change the status of workstations by:
   – Rerouting work from one workstation to another
   – Modifying workstation reporting attributes
– Updating the availability of resources
                     – Changing the way resources are handled
                     Replan or extend the current plan.

                 In addition to using the dialogs, you can modify the current plan from your own
                 job streams using the program interface or the application programming
                 interface. You can also trigger Tivoli Workload Scheduler for z/OS to dynamically
                 modify the plan using TSO commands or a batch program. This enables
                 unexpected work to be added automatically to the plan.
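
                  As an example of the batch route, the following sketch runs the SRSTAT TSO
                  command in a batch TSO step to report a special resource as available; such
                  a status change can in turn be used to trigger unplanned work. The load
                  library, subsystem, and resource names are placeholders and must be replaced
                  with the values used in your own installation.

                  //SRAVAIL  JOB (ACCT),'SET RESOURCE',CLASS=A,MSGCLASS=X
                  //* Issue the SRSTAT TSO command in batch to mark a special
                  //* resource as available to the scheduler.
                  //TSOBATCH EXEC PGM=IKJEFT01
                  //STEPLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQLMD0
                  //SYSTSPRT DD SYSOUT=*
                  //SYSTSIN  DD *
                    SRSTAT 'SALES.INPUT.DATA.READY' SUBSYS(MSTR) AVAIL(YES)
                  /*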

                 It is important to remember that the current plan contains copies of the objects
                 that are read from the Tivoli Workload Scheduler for z/OS databases. This
                 means that changes that are made to current plan instances will not be reflected
                 in the corresponding database objects.


2.1.4 Other Tivoli Workload Scheduler for z/OS features
                 In the following sections we investigate other features of IBM Tivoli Workload
                 Scheduler for z/OS.

                 Automatically controlling the production workload
                 Tivoli Workload Scheduler for z/OS automatically drives the production workload
                 by monitoring the flow of work and by directing the processing of jobs so that it
                 follows the business priorities that are established in the plan.

                 Through its interface to the NetView program or its management-by-exception
                 ISPF dialog, Tivoli Workload Scheduler for z/OS can alert the production control
                 specialist to problems in the production workload processing. Furthermore, the
                 NetView program can automatically trigger Tivoli Workload Scheduler for z/OS to
                 perform corrective actions in response to these problems.

                 Recovery and restart
                 Tivoli Workload Scheduler for z/OS provides automatic restart facilities for your
                 production work. You can specify the restart actions to be taken if work that it
                 initiates ends in error (Figure 2-6 on page 45). You can use these functions to
                 predefine automatic error-recovery and restart actions for jobs and started tasks.
                 The scheduler’s integration with the NetView for OS/390 program enables it to
                 automatically pass alerts to the NetView for OS/390 in error situations. Use of the
                 z/OS cross-system coupling facility (XCF) enables Tivoli Workload Scheduler for
                 z/OS processing when system failures occur.

Figure 2-6 IBM Tivoli Workload Scheduler for z/OS automatic recovery and restart

Recovery of jobs and started tasks
Automatic recovery actions for failed jobs are specified in user-defined control
statements. Parameters in these statements determine the recovery actions to
be taken when a job or started task ends in error.
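
The recovery statements are coded as comments directly in the job's JCL. The
fragment below is a minimal sketch of the idea: the program and data set names
are invented, and the parameters shown (the error codes to match and the restart
option) are only an illustrative subset of what the RECOVER statement supports.

//PAYDAILY JOB (ACCT),'DAILY PAYROLL',CLASS=A,MSGCLASS=X
//* If the job ends with one of the listed error codes, let
//* Tivoli Workload Scheduler for z/OS apply the recovery action
//* (here, restart the job) instead of leaving it in error status.
//*%OPC RECOVER JOBCODE=(S806,U001),RESTART=YES
//STEP1    EXEC PGM=PAYROLL1
//INPUT    DD DSN=PROD.PAYROLL.INPUT,DISP=SHR
//REPORT   DD SYSOUT=*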

Restart and cleanup
Restart and cleanup are basically two tasks:
   Restarting an operation at the job level or step level
   Cleaning up the associated data sets

 Note: The IBM Tivoli Workload Scheduler for z/OS 8.2 restart and cleanup
 function has been updated and redesigned. Apply the fixes for APARs PQ79506
 and PQ79507 to get the redesigned and updated function.

You can use restart and cleanup to catalog, uncatalog, or delete data sets when
a job ends in error or when you need to rerun a job. Dataset cleanup takes care
of JCL in the form of in-stream JCL, in-stream procedures, and cataloged
procedures on both local and remote systems. This function can be initiated
automatically by Tivoli Workload Scheduler for z/OS or manually by a user
through the panels. Tivoli Workload Scheduler for z/OS resets the catalog to the
status that it was in before the job ran, both for generation data groups (GDGs)
and for DD allocated data sets contained in JCL. In addition, restart and cleanup
                 supports the use of Removable Media Manager in your environment.

                  Restart at both the step level and the job level is also provided in the IBM Tivoli
                  Workload Scheduler for z/OS legacy ISPF panels and in the JSC. The function
                  manages resolution of generation data group (GDG) names, JCL containing nested
                  INCLUDE or PROC statements, and IF-THEN-ELSE statements. Tivoli Workload
                  Scheduler for z/OS also automatically identifies problems that can prevent a
                  successful restart, and determines the “best restart step.”

                 You can browse the job log or request a step-level restart for any z/OS job or
                 started task even when there are no catalog modifications. The job-log browse
                 functions are also available for the workload on other operating platforms, which
                 is especially useful for those environments that do not support a System Display
                 and Search Facility (SDSF) or something similar.

                 These facilities are available to you without the need to make changes to your
                 current JCL. Tivoli Workload Scheduler for z/OS gives you an enterprise-wide
                 data set cleanup capability on remote agent systems.

                 Production workload restart
                 Tivoli Workload Scheduler for z/OS provides a production workload restart, which
                 can automatically maintain the processing of your work if a system or connection
                 fails. Scheduler-controlled production work for the unsuccessful system is
                 rerouted to another system. Because Tivoli Workload Scheduler for z/OS can
                 restart and manage the production workload, the integrity of your processing
                 schedule is maintained, and service continues for your customers.

                 Tivoli Workload Scheduler for z/OS exploits the VTAM Model Application
                 Program Definition feature and the z/OS-defined symbols to ease the
                  configuration in a sysplex environment, giving the user a single-system
                 view of the sysplex.

                  Starting, stopping, and managing your engines and agents does not require you
                  to know which z/OS image in the sysplex they are actually running on.

                 z/OS Automatic Restart Manager support
                 In case of program failure, all of the scheduler components are enabled to be
                 restarted by the Automatic Restart Manager (ARM) of the z/OS operating
                 system.

                 Automatic status checking
                 To track the work flow, Tivoli Workload Scheduler for z/OS interfaces directly with
                 the operating system, collecting and analyzing status information about the
                 production work that is currently active in the system. Tivoli Workload Scheduler
                 for z/OS can record status information from both local and remote processors.
When status information is reported from remote sites in different time zones,
Tivoli Workload Scheduler for z/OS makes allowances for the time differences.

Status reporting from heterogeneous environments
The processing on other operating environments can also be tracked by Tivoli
Workload Scheduler for z/OS. You can use supplied programs to communicate
with the engine from any environment that can establish communications with a
z/OS system.

Status reporting from user programs
You can pass status information about production workload processing to Tivoli
Workload Scheduler for z/OS from your own user programs through a standard
supplied routine.

Additional job-completion checking
If required, Tivoli Workload Scheduler for z/OS provides further status checking
by scanning SYSOUT and other print data sets from your processing when the
success or failure of the processing cannot be determined by completion codes.
For example, Tivoli Workload Scheduler for z/OS can check the text of system
messages or messages originating from your user programs. Using information
contained in job completion checker (JCC) tables, Tivoli Workload Scheduler for
z/OS determines what actions to take when it finds certain text strings. These
actions can include:
   Reporting errors
   Re-queuing SYSOUT
   Writing incident records to an incident data set

Managing unplanned work
Tivoli Workload Scheduler for z/OS can be automatically triggered to update the
current plan with information about work that cannot be planned in advance. This
enables Tivoli Workload Scheduler for z/OS to control unexpected work.
Because it checks the processing status of this work, automatic recovery
facilities are also available.

Interfacing with other programs
Tivoli Workload Scheduler for z/OS provides a program interface (PIF) with which
you can automate most actions that you can perform online through the dialogs.
This interface can be called from CLISTs, user programs, and via TSO
commands.

The application programming interface (API) lets your programs communicate
with Tivoli Workload Scheduler for z/OS from any compliant platform. You can
use Common Programming Interface for Communications (CPI-C), advanced
program-to-program communication (APPC), or your own logical unit (LU) 6.2
verbs to converse with Tivoli Workload Scheduler for z/OS through the API. You
                 can use this interface to query and update the current plan. The programs can be
                 running on any platform that is connected locally, or remotely through a network,
                 with the z/OS system where the engine runs.

                 Management of critical jobs
                 IBM Tivoli Workload Scheduler for z/OS exploits the capability of the Workload
                 Manager (WLM) component of z/OS to ensure that critical jobs are completed on
                 time. If a critical job is late, Tivoli Workload Scheduler for z/OS favors it using the
                 existing Workload Manager interface.

                 Security
                 Today, data processing operations increasingly require a high level of data
                 security, particularly as the scope of data processing operations expands and
                 more people within the enterprise become involved. Tivoli Workload Scheduler
                 for z/OS provides complete security and data integrity within the range of its
                 functions. It provides a shared central service to different user departments even
                 when the users are in different companies and countries, and a high level of
                 security to protect scheduler data and resources from unauthorized access. With
                 Tivoli Workload Scheduler for z/OS, you can easily organize, isolate, and protect
                 user data to safeguard the integrity of your end-user applications (Figure 2-7).
                 Tivoli Workload Scheduler for z/OS can plan and control the work of many user
                 groups and maintain complete control of access to data and services.




                 Figure 2-7 IBM Tivoli Workload Scheduler for z/OS security

Audit trail
With the audit trail, you can define how you want IBM Tivoli Workload Scheduler
for z/OS to log accesses (both reads and updates) to scheduler resources.
Because it provides a history of changes to the databases, the audit trail can be
extremely useful for staff that works with debugging and problem determination.

A sample program is provided for reading audit-trail records. The program reads
the logs for a period that you specify and produces a report detailing changes
that have been made to scheduler resources.

System Authorization Facility (SAF)
IBM Tivoli Workload Scheduler for z/OS uses the system authorization facility, a
function of z/OS, to pass authorization verification requests to your security
system (for example, RACF®). This means that you can protect your scheduler
data objects with any security system that uses the SAF interface.

Protection of data and resources
Each user request to access a function or to access data is validated by SAF.
This is some of the information that can be protected:
   Calendars and periods
   Job stream names or job stream owner, by name
   Workstation, by name
   Job stream-specific data in the plan
   Operator instructions
   JCL

To support distributed, multi-user handling, Tivoli Workload Scheduler for z/OS
enables you to control the level of security that you want to implement, right down
to the level of individual records. You can define generic or specific RACF
resource names to extend the level of security checking.

If you have RACF Version 2 Release 1 installed, you can use the IBM Tivoli
Workload Scheduler for z/OS reserved resource class (IBMOPC) to manage
your Tivoli Workload Scheduler for z/OS security environment. This means that
you do not have to define your own resource class by modifying RACF and
restarting your system.

Data integrity during submission
Tivoli Workload Scheduler for z/OS can ensure the correct security environment
for each job it submits, regardless of whether the job is run on a local or a remote
system. Tivoli Workload Scheduler for z/OS enables you to create tailored
security profiles for individual jobs or groups of jobs.

2.2 Tivoli Workload Scheduler architecture
                 Tivoli Workload Scheduler helps you plan every phase of production. During the
                 processing day, its production control programs manage the production
                 environment and automate most operator activities. Tivoli Workload Scheduler
                 prepares jobs for execution, resolves interdependencies, and launches and
                 tracks each job. Because jobs start running as soon as their dependencies are
                 satisfied, idle time is minimized and throughput is improved. Jobs never run out
                 of sequence. If a job ends in error, Tivoli Workload Scheduler handles the
                 recovery process with little or no operator intervention.

                 IBM Tivoli Workload Scheduler is composed of three major parts:
                     IBM Tivoli Workload Scheduler engine
                     The IBM Tivoli Workload Scheduler engine is installed on every
                     non-mainframe workstation in the scheduling network (UNIX, Windows, and
                     OS/400 computers). When the engine is installed on a workstation, it can be
                     configured to play a specific role in the scheduling network. For example, the
                     engine can be configured to be a master domain manager, a domain
                     manager, or a fault-tolerant agent. In an ordinary Tivoli Workload Scheduler
                     network, there is a single master domain manager at the top of the network.
                      However, in an end-to-end scheduling network, there is no master domain
                      manager; its functions are instead performed by the IBM Tivoli
                      Workload Scheduler for z/OS engine, installed on a mainframe. This is
                     discussed in more detail later in this chapter.
                     IBM Tivoli Workload Scheduler connector
                     The connector “connects” the Job Scheduling Console to Tivoli Workload
                     Scheduler, routing commands from JSC to the Tivoli Workload Scheduler
                     engine. In an ordinary IBM Tivoli Workload Scheduler network, the Tivoli
                     Workload Scheduler connector is usually installed on the master domain
                     manager. In an end-to-end scheduling network, there is no master domain
                      manager, so the connector is usually installed on the first-level domain
                     managers. The Tivoli Workload Scheduler connector can also be installed on
                     other domain managers or fault-tolerant agents in the network.
                     The connector software is installed on top of the Tivoli Management
                     Framework, which must be configured as a Tivoli Management Region server
                     or managed node. The connector software cannot be installed on a TMR
                     endpoint.
                     Job Scheduling Console (JSC)
                     JSC is the Java-based graphical user interface for the IBM Tivoli Workload
                     Scheduler suite. The Job Scheduling Console runs on any machine from
                     which you want to manage Tivoli Workload Scheduler plan and database
                     objects. It provides, through the Tivoli Workload Scheduler connector, the
functions of the command-line programs conman and composer. The Job
              Scheduling Console can be installed on a desktop workstation or laptop, as
              long as the JSC has a TCP/IP link with the machine running the Tivoli
              Workload Scheduler connector. Using the JSC, operators can schedule and
              administer Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS
              over the network.

           In the next sections, we provide an overview of the IBM Tivoli Workload
           Scheduler network and workstations, the topology that is used to describe the
           architecture in Tivoli Workload Scheduler, the Tivoli Workload Scheduler
           components, and the plan.


2.2.1 The IBM Tivoli Workload Scheduler network
           A Tivoli Workload Scheduler network is made up of the workstations, or CPUs,
           on which jobs and job streams are run.

           A Tivoli Workload Scheduler network contains at least one IBM Tivoli Workload
           Scheduler domain, the master domain, in which the master domain manager is
           the management hub. It is the master domain manager that manages the
           databases and it is from the master domain manager that you define new objects
           in the databases. Additional domains can be used to divide a widely distributed
           network into smaller, locally managed groups.

           In the simplest configuration, the master domain manager maintains direct
           communication with all of the workstations (fault-tolerant agents) in the Tivoli
           Workload Scheduler network. All workstations are in the same domain,
           MASTERDM (Figure 2-8).


            Figure 2-8 A sample IBM Tivoli Workload Scheduler network with only one domain

Using multiple domains reduces the amount of network traffic by reducing the
                 communications between the master domain manager and the other computers
                 in the network. Figure 2-9 depicts an example of a Tivoli Workload Scheduler
                 network with three domains. In this example, the master domain manager is
                 shown as an AIX system. The master domain manager does not have to be on
                 an AIX system; it can be installed on any of several different platforms, including
                 AIX, Linux, Solaris, HPUX, and Windows. Figure 2-9 is only an example that is
                 meant to give an idea of a typical Tivoli Workload Scheduler network.


                  Figure 2-9 IBM Tivoli Workload Scheduler network with three domains

                 In this configuration, the master domain manager communicates directly only
                 with the subordinate domain managers. The subordinate domain managers
                 communicate with the workstations in their domains. In this way, the number of
                 connections from the master domain manager are reduced. Multiple domains
                 also provide fault-tolerance: If the link from the master is lost, a domain manager
                 can still manage the workstations in its domain and resolve dependencies
                 between them. This limits the impact of a network outage. Each domain may also
                 have one or more backup domain managers that can become the domain
                 manager for the domain if the domain manager fails.

                 Before the start of each day, the master domain manager creates a plan for the
                 next 24 hours. This plan is placed in a production control file, named Symphony.
                 Tivoli Workload Scheduler is then restarted throughout the network, and the
                 master domain manager sends a copy of the Symphony file to each of the
subordinate domain managers. Each domain manager then sends a copy of the
Symphony file to the fault-tolerant agents in that domain.

After the network has been started, scheduling events such as job starts and
completions are passed up from each workstation to its domain manager. The
domain manager updates its Symphony file with the events and then passes the
events up the network hierarchy to the master domain manager. The events are
then applied to the Symphony file on the master domain manager. Events from
all workstations in the network will be passed up to the master domain manager.
In this way, the master’s Symphony file contains the authoritative record of what
has happened during the production day. The master also broadcasts the
changes down throughout the network, updating the Symphony files of domain
managers and fault-tolerant agents that are running in full status mode.

It is important to remember that Tivoli Workload Scheduler does not limit the
number of domains or levels (the hierarchy) in the network. There can be as
many levels of domains as is appropriate for a given computing environment. The
number of domains or levels in the network should be based on the topology of
the physical network where Tivoli Workload Scheduler is installed. Most often,
geographical boundaries are used to determine divisions between domains.

See 3.5.4, “Network planning and considerations” on page 141 for more
information about how to design an IBM Tivoli Workload Scheduler network.

Figure 2-10 on page 54 shows an example of a four-tier Tivoli Workload
Scheduler network:
1.   Master domain manager, MASTERDM
2.   DomainA and DomainB
3.   DomainC, DomainD, DomainE, FTA1, FTA2, and FTA3
4.   FTA4, FTA5, FTA6, FTA7, FTA8, and FTA9

                  Figure 2-10 A multi-tiered IBM Tivoli Workload Scheduler network


2.2.2 Tivoli Workload Scheduler workstation types
                  In most cases, workstation definitions refer to physical workstations. However,
                 in the case of extended and network agents, the workstations are logical
                 definitions that must be hosted by a physical IBM Tivoli Workload Scheduler
                 workstation.

                 There are several different types of Tivoli Workload Scheduler workstations:
                     Master domain manager (MDM)
                     The domain manager of the topmost domain of a Tivoli Workload Scheduler
                     network. It contains the centralized database of all defined scheduling
                     objects, including all jobs and their dependencies. It creates the plan at the
                     start of each day, and performs all logging and reporting for the network. The
                     master distributes the plan to all subordinate domain managers and
                     fault-tolerant agents. In an end-to-end scheduling network, the IBM Tivoli
                     Workload Scheduler for z/OS engine (controller) acts as the master domain
                     manager.

Domain manager (DM)
   The management hub in a domain. All communications to and from the
   agents in a domain are routed through the domain manager. The domain
   manager can resolve dependencies between jobs in its subordinate agents.
   The copy of the plan on the domain manager is updated with reporting and
   logging from the subordinate agents.
   Backup domain manager
   A fault-tolerant agent that is capable of assuming the responsibilities of its
   domain manager. The copy of the plan on the backup domain manager is
   updated with the same reporting and logging information as the domain
   manager plan.
   Fault-tolerant agent (FTA)
   A workstation that is capable of resolving local dependencies and launching
   its jobs in the absence of a domain manager. It has a local copy of the plan
   generated in the master domain manager. It is also called a fault tolerant
   workstation.
   Standard agent (SA)
   A workstation that launches jobs only under the direction of its domain
   manager.
   Extended agent (XA)
    A logical workstation definition that enables you to launch and control jobs on
   other systems and applications. IBM Tivoli Workload Scheduler for
   Applications includes extended agent methods for the following systems: SAP
   R/3, Oracle Applications, PeopleSoft, CA7, JES2, and JES3.

Figure 2-11 on page 56 shows a Tivoli Workload Scheduler network with some of
the different workstation types.

It is important to remember that domain manager FTAs, including the master
domain manager FTA and backup domain manager FTAs, are FTAs with some
extra responsibilities. The servers with these FTAs can, and most often will, be
servers where you run normal batch jobs that are scheduled and tracked by Tivoli
Workload Scheduler. This means that these servers do not have to be servers
dedicated only for Tivoli Workload Scheduler work. The servers can still do some
other work and run some other applications.

However, you should not use one of your busiest servers as one of your
first-level Tivoli Workload Scheduler domain managers.

                  Figure 2-11 IBM Tivoli Workload Scheduler network with different workstation types


2.2.3 Tivoli Workload Scheduler topology
                 The purpose of having multiple domains is to delegate some of the
                 responsibilities of the master domain manager and to provide extra fault
                 tolerance. Fault tolerance is enhanced because a domain manager can continue
                 to resolve dependencies within the domain even if the master domain manager is
                 temporarily unavailable.

                 Workstations are generally grouped into a domain because they share a
                 common set of characteristics. Most often, workstations will be grouped into a
                 domain because they are in close physical proximity to one another, such as in
                 the same office. Domains may also be based on organizational unit (for example,
                 department), business function, or application. Grouping related workstations in
                 a domain reduces the amount of information that must be communicated
                 between domains, and thereby reduces the amount of network traffic generated.

                 In 3.5.4, “Network planning and considerations” on page 141, you can find more
                 information about how to configure an IBM Tivoli Workload Scheduler network
                 based on your particular distributed network and environment.

2.2.4 IBM Tivoli Workload Scheduler components
           Tivoli Workload Scheduler is composed of several separate programs, each with
          a distinct function. This division of labor segregates networking, dependency
          resolution, and job launching into their own individual processes. These
          processes communicate among themselves through the use of message files
          (also called event files). Every event that occurs during the production day is
          handled by passing events between processes through the message files.

          A computer running Tivoli Workload Scheduler has several active IBM Tivoli
          Workload Scheduler processes. They are started as a system service, by the
          StartUp command, or manually from the Job Scheduling Console. The main
          processes are:
          netman                The network listener program, which initially receives all
                                TCP connections. The netman program accepts an
                                incoming request from a remote program, spawns a new
                                process to handle the request, and if necessary hands the
                                socket over to the new process.
          writer                The network writer process that passes incoming
                                messages from a remote workstation to the local mailman
                                process (via the Mailbox.msg event file).
          mailman               The primary message management process. The
                                mailman program reads events from the Mailbox.msg file
                                and then either passes them to batchman (via the
                                Intercom.msg event file) or sends them to a remote
                                workstation.
          batchman              The production control process. Working with the plan
                                 (Symphony), batchman starts job streams, resolves
                                dependencies, and directs jobman to launch jobs. After
                                the Symphony file has been created (at the beginning of
                                the production day), batchman is the only program that
                                makes changes to the Symphony file.
          jobman                The job control process. The jobman program launches
                                and monitors jobs.

          Figure 2-12 on page 58 shows the IBM Tivoli Workload Scheduler processes and
          their intercommunication via message files.

Figure 2-12 IBM Tivoli Workload Scheduler interprocess communication


2.2.5 IBM Tivoli Workload Scheduler plan
                      The IBM Tivoli Workload Scheduler plan is the to-do list that tells Tivoli Workload
                      Scheduler what jobs to run and what dependencies must be satisfied before
                      each job is launched. The plan usually covers 24 hours; this period is sometimes
                      referred to as the production day and can start at any point in the day. The best
                      time of day to create a new plan is a time when few or no jobs are expected to be
                      running. A new plan is created at the start of the production day.

                      After the plan has been created, a copy is sent to all subordinate workstations.
                       The domain managers then distribute the plan to their fault-tolerant agents.

                      The subordinate domain managers distribute their copy to all of the fault-tolerant
                      agents in their domain and to all domain managers that are subordinate to them,
                      and so on down the line. This enables fault-tolerant agents throughout the
                      network to continue processing even if the network connection to their domain
                      manager is down. From the Job Scheduling Console or the command line
                      interface, the operator can view and make changes in the day’s production by
                      making changes in the Symphony file.

                      Figure 2-13 on page 59 shows the distribution of the Symphony file from master
                      domain manager to domain managers and their subordinate agents.

          Figure 2-13 Distribution of plan (Symphony file) in a Tivoli Workload Scheduler network

         IBM Tivoli Workload Scheduler processes monitor the Symphony file and make
         calls to the operating system to launch jobs as required. The operating system
         runs the job, and in return informs IBM Tivoli Workload Scheduler whether the
         job has completed successfully or not. This information is entered into the
         Symphony file to indicate the status of the job. This way the Symphony file is
         continuously updated with the status of all jobs: the work that needs to be done,
         the work in progress, and the work that has been completed.



2.3 End-to-end scheduling architecture
         In the two previous sections, 2.2, “Tivoli Workload Scheduler architecture” on
         page 50, and 2.1, “IBM Tivoli Workload Scheduler for z/OS architecture” on
         page 27, we described the architecture of Tivoli Workload Scheduler and Tivoli
         Workload Scheduler for z/OS. In this section, we bring the two together; here we
         describe how the programs work together to function as a unified end-to-end
         scheduling solution.

         End-to-end scheduling makes it possible to schedule and control jobs on
         mainframe, Windows, and UNIX environments, providing truly distributed
         scheduling. In the end-to-end configuration, Tivoli Workload Scheduler for z/OS



is used as the planner for the job scheduling environment. Tivoli Workload
                 Scheduler domain managers and fault-tolerant agents are used to schedule on
                 the non-mainframe platforms, such as UNIX and Windows.


2.3.1 How end-to-end scheduling works
                 End-to-end scheduling means controlling scheduling from one end of an
                 enterprise to the other — from the mainframe all the way down to the client
                 workstation. Tivoli Workload Scheduler provides an end-to-end scheduling
                 solution whereby one or more IBM Tivoli Workload Scheduler domain managers,
                  and their underlying agents and domains, are put under the direct control of an IBM
                 Tivoli Workload Scheduler for z/OS engine. To the domain managers and FTAs in
                 the network, the IBM Tivoli Workload Scheduler for z/OS engine appears to be
                 the master domain manager.

                 Tivoli Workload Scheduler for z/OS creates the plan (the Symphony file) for the
                 Tivoli Workload Scheduler network and sends the plan down to the first-level
                 domain managers. Each of these domain managers sends the plan to all of the
                 subordinate workstations in its domain.

                 The domain managers act as brokers for the distributed network by resolving all
                 dependencies for the subordinate managers and agents. They send their
                 updates (in the form of events) to Tivoli Workload Scheduler for z/OS, which
                 updates the plan accordingly. Tivoli Workload Scheduler for z/OS handles its own
                 jobs and notifies the domain managers of all the status changes of its jobs that
                 involve the IBM Tivoli Workload Scheduler plan. In this configuration, the domain
                 manager and all the Tivoli Workload Scheduler workstations recognize Tivoli
                 Workload Scheduler for z/OS as the master domain manager and notify it of all of
                  the changes occurring in their own plans. At the same time, the agents are not
                  permitted to interfere with the Tivoli Workload Scheduler for z/OS jobs, because
                  those jobs are viewed as running on the master, which is the only node in charge
                  of them.

                 In Figure 2-14 on page 61, you can see a Tivoli Workload Scheduler network
                 managed by a Tivoli Workload Scheduler for z/OS engine. This is accomplished
                 by connecting a Tivoli Workload Scheduler domain manager directly to the Tivoli
                 Workload Scheduler for z/OS engine. Actually, if you compare Figure 2-9 on
                 page 52 with Figure 2-14 on page 61, you will see that the Tivoli Workload
                 Scheduler network that is connected to Tivoli Workload Scheduler for z/OS is
                 managed by a Tivoli Workload Scheduler master domain manager. When
                 connecting this network to the engine, the AIX server that was acting as the Tivoli
                 Workload Scheduler master domain manager is replaced by a mainframe. The
                 new master domain manager is the Tivoli Workload Scheduler for z/OS engine.




Figure 2-14 IBM Tivoli Workload Scheduler for z/OS end-to-end scheduling (diagram: in
MASTERDM the master domain manager OPCMASTER is the TWS for z/OS engine, controller
and server, on z/OS; domain managers DMA (AIX) in DomainA and DMB (HPUX) in DomainB
manage FTA1 (Linux), FTA2 (OS/400), FTA3 (Windows XP), and FTA4 (Solaris))

In Tivoli Workload Scheduler for z/OS, you can access job streams (also known
as schedules in Tivoli Workload Scheduler and applications in Tivoli Workload
Scheduler for z/OS) and add them to the current plan in Tivoli Workload
Scheduler for z/OS. In addition, you can build dependencies among Tivoli
Workload Scheduler for z/OS job streams and Tivoli Workload Scheduler jobs.
From Tivoli Workload Scheduler for z/OS, you can monitor and control the FTAs.

In the Tivoli Workload Scheduler for z/OS current plan, you can specify jobs to
run on workstations in the Tivoli Workload Scheduler network. The Tivoli
Workload Scheduler for z/OS engine passes the job information to the
Symphony file in the Tivoli Workload Scheduler for z/OS server, which in turn
passes the Symphony file to the first-level Tivoli Workload Scheduler domain
managers to distribute and process. In turn, Tivoli Workload Scheduler reports
the status of running and completed jobs back to the current plan for monitoring
in the Tivoli Workload Scheduler for z/OS engine.

The IBM Tivoli Workload Scheduler for z/OS engine consists of two
components (started tasks on the mainframe): the controller and the server (also
called the end-to-end server).



2.3.2 Tivoli Workload Scheduler for z/OS end-to-end components
                  To run Tivoli Workload Scheduler for z/OS end-to-end scheduling, you must have
                  a Tivoli Workload Scheduler for z/OS server started task dedicated to end-to-end
                  scheduling. It is also possible to use the same server to communicate with the
                  Job Scheduling Console. Tivoli Workload Scheduler for z/OS uses TCP/IP for this
                  communication.
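
                  For illustration only, a server started task that serves both end-to-end
                  scheduling and the Job Scheduling Console might contain a SERVOPTS
                  initialization statement along the following lines; the subsystem and member
                  names are invented values, and the full syntax is covered in 4.2.6, "Initialization
                  statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling" on
                  page 174:

                     SERVOPTS SUBSYS(TWSC)
                              PROTOCOL(E2E,JSC)
                              TPLGYPRM(TPLGPARM)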

                 The Tivoli Workload Scheduler for z/OS controller uses the end-to-end server to
                 communicate events to the FTAs. The end-to-end server will start multiple tasks
                 and processes using the z/OS UNIX System Services (USS).

                  The Tivoli Workload Scheduler for z/OS end-to-end server must run on the same
                  z/OS system where the Tivoli Workload Scheduler for z/OS controller that it
                  serves is started and active.

                  Tivoli Workload Scheduler for z/OS end-to-end scheduling consists of three
                 major components:
                     The IBM Tivoli Workload Scheduler for z/OS controller: Manages database
                     objects, creates plans with the workload, and executes and monitors the
                     workload in the plan.
                     The IBM Tivoli Workload Scheduler for z/OS server: Acts as the Tivoli
                     Workload Scheduler master domain manager. It receives a part of the current
                     plan (the Symphony file) from the Tivoli Workload Scheduler for z/OS
                      controller, which contains the jobs and job streams to be executed in the Tivoli
                     Workload Scheduler network. The server is the focal point for all
                     communication to and from the Tivoli Workload Scheduler network.
                     IBM Tivoli Workload Scheduler domain managers at the first level: Serve as
                     the communication hub between the Tivoli Workload Scheduler for z/OS
                     server and the distributed Tivoli Workload Scheduler network. The domain
                     managers at first level are connected directly to the Tivoli Workload Scheduler
                     master domain manager running in USS in the Tivoli Workload Scheduler for
                     z/OS end-to-end server.
                     In Tivoli Workload Scheduler for z/OS 8.2, you can have one or several Tivoli
                     Workload Scheduler domain managers at the first level. These domain
                     managers are connected directly to the Tivoli Workload Scheduler for z/OS
                     end-to-end server, so they are called first-level domain managers.
                      It is possible to designate backup domain managers for the first-level Tivoli
                      Workload Scheduler domain managers (as it
                     is for “normal” Tivoli Workload Scheduler fault-tolerant agents and domain
                     managers).




Detailed description of the communication
                 Figure 2-15 shows the communication between the Tivoli Workload Scheduler
                 for z/OS controller and the Tivoli Workload Scheduler for z/OS server.


Figure 2-15 IBM Tivoli Workload Scheduler for z/OS 8.2 interprocess communication
(diagram: in the TWS for z/OS controller, the GS, WA, NMM, and EM tasks and the end-to-end
enabler exchange events with the sender and receiver subtasks through the TWSCS, TWSOU,
and TWSIN data sets; in the server, running in USS, the translator process and its output
translator, input writer, input translator, job log retriever, and script downloader threads work
with netman, writer, mailman, and batchman through the NetReq.msg, Mailbox.msg,
Intercom.msg, and tomaster.msg files and the Symphony file; remote mailman, writer,
scribner, and dwnldr processes run on the connected workstations)

                 Tivoli Workload Scheduler for z/OS server processes and tasks
                 The end-to-end server address space hosts the tasks and the data sets that
                 function as the intermediaries between the controller and the domain managers
                  at the first level. In many cases, these tasks and data sets are replicas of the
                 distributed Tivoli Workload Scheduler processes and files.

                 The Tivoli Workload Scheduler for z/OS server uses the following processes,
                 threads, and tasks for end-to-end scheduling (see Figure 2-15):
                 netman                  The Tivoli Workload Scheduler network listener daemon.
                                         It is started automatically when the end-to-end server task
                                         starts. The netman process monitors the NetReq.msg
                                         queue and listens to the TCP port defined in the server
                                         topology portnumber parameter. (Default is port 31111.)
                                         When netman receives a request, it starts another
                                         program to handle the request, usually writer or mailman.
                                         Requests to start or stop mailman are written by output


translator to the NetReq.msg queue. Requests to start or
                                            stop writer are sent via TCP by the mailman process on a
                                            remote workstation (domain manager at the first level).
                 writer                     One writer process is started by netman for each
                                            connected remote workstation (domain manager at the
                                            first level). Each writer process receives events from the
                                            mailman process on a remote workstation and writes
                                            these events to the Mailbox.msg file.
                 mailman                    The main message handler process. Its main tasks are:
                                              Routing events. It reads the events stored in the
                                              Mailbox.msg queue and sends them either to the
                                              controller (writing them in the Intercom.msg file), or to
                                              the writer process on a remote workstation (via TCP).
                                              Linking to remote workstations (domain managers at
                                              the first level). The mailman process requests that the
                                              netman program on each remote workstation starts a
                                              writer process to accept the connection.
                                              Sending the Symphony file to subordinate workstations
                                              (domain managers at the first level). When a new
                                              Symphony file is created, the mailman process sends a
                                              copy of the file to each subordinate domain manager
                                              and fault-tolerant agent.
                 batchman                   Updates the Symphony file and resolves dependencies at
                                            master level. After the Symphony file has been written the
                                            first time, batchman is the only program that makes
                                            changes to the file.
                                             The batchman program in USS does not perform job
                                             submission (this is why there is no jobman process
                                             running in UNIX System Services).
                 translator                 Through its input and output threads (discussed in more
                                            detail below), the translator process translates events
                                            from Tivoli Workload Scheduler format to Tivoli Workload
                                            Scheduler for z/OS format and vice versa. The translator
                                            program was developed specifically to handle the job of
                                            event translation from OPC events to Maestro events, and
                                            vice versa. The translator process runs in UNIX System
                                            Services on the mainframe; it does not run on domain
                                            managers or FTAs. The translator program provides the
                                            glue that binds Tivoli Workload Scheduler for z/OS and
                                            Tivoli Workload Scheduler together; translator enables




these two products to function as a unified scheduling
                     system.
job log retriever    A thread of the translator process that is spawned to fetch
                     a job log from a fault-tolerant agent. One job log retriever
                     thread is spawned for each requested FTA job log.
                     The job log retriever receives the log, sizes it according to
                     the LOGLINES parameter, translates it from UTF-8 to
                     EBCDIC, and queues it in the inbound queue of the
                     controller. The retrieval of a job log is a lengthy operation
                     and can take a few moments to complete.
                     The user may request several logs at the same time. The
                     job log retriever thread terminates after the log has been
                     written to the inbound queue. If using the IBM Tivoli
                     Workload Scheduler for z/OS ISPF panel interface, the
                     user will be notified by a message when the job log has
                     been received.
script downloader    A thread of the translator process that is spawned to
                      download the script for an operation (job) defined in Tivoli
                      Workload Scheduler for z/OS with the Centralized Script option set
                     to Yes. One script downloader thread is spawned for each
                     script that must be downloaded. Several script
                     downloader threads can be active at the same time. The
                     script that is to be downloaded is received from the output
                     translator.
starter              The basic or main process in the end-to-end server UNIX
                     System Services. The starter process is the first process
                     that is started in UNIX System Services when the
                     end-to-end server started task is started. The starter
                     process starts the translator and the netman processes
                     (not shown in Figure 2-15 on page 63).
Events passed from the server to the controller
input translator     A thread of the translator process. The input translator
                     thread reads events from the tomaster.msg file and
                     translates them from Tivoli Workload Scheduler format to
                     Tivoli Workload Scheduler for z/OS format. It also
                     performs UTF-8 to EBCDIC translation and sends the
                     translated events to the input writer.
input writer         Receives the input from the job log retriever, input
                     translator, and script downloader and writes it in the
                     inbound queue (the EQQTWSIN data set).




receiver subtask           A subtask of the end-to-end task that runs in the Tivoli
                                            Workload Scheduler for z/OS controller. It receives events
                                            from the inbound queue and queues them to the Event
                                            Manager task. The events have already been filtered and
                                            elaborated by the input translator.
                 Events passed from the controller to the server
                 sender subtask             A subtask of the end-to-end task in the Tivoli Workload
                                            Scheduler for z/OS controller. It receives events for
                                             changes to the current plan that are related to Tivoli
                                            Workload Scheduler fault-tolerant agents. The Tivoli
                                            Workload Scheduler for z/OS tasks that can change the
                                            current plan are: General Service (GS), Normal Mode
                                            Manager (NMM), Event Manager (EM), and Workstation
                                            Analyzer (WA).
                                            The events are received via SSI, the usual method the
                                            Tivoli Workload Scheduler for z/OS tasks use to
                                             exchange events.
                                           The NMM sends events to the sender task when the plan
                                           is extended or replanned for synchronization purposes.
                 output translator         A thread of the translator process. The output translator
                                           thread reads events from the outbound queue. It translates
                                           the events from Tivoli Workload Scheduler for z/OS format
                                           to Tivoli Workload Scheduler format and evaluates them,
                                           performing the appropriate function. Most events, including
                                           those related to changes to the Symphony file, are written
                                           to Mailbox.msg. Requests to start or stop netman or
                                           mailman are written to NetReq.msg. Output translator also
                                           translates events from EBCDIC to UTF-8.
                                           The output translator interacts with three different
                                           components, depending on the type of the event:
                                              Starts a job log retriever thread if the event is to retrieve
                                              the log of a job from a Tivoli Workload Scheduler agent.
                                              Starts a script downloader thread if the event is to
                                              download the script.
                                              Queues an event in NetReq.msg if the event is to start
                                              or stop mailman.
                                              Queues events in Mailbox.msg for the other events that
                                              are sent to update the Symphony file on the Tivoli
                                              Workload Scheduler agents (for example, events for a
                                              job that has changed status, events for manual changes



on jobs or workstations, or events to link or unlink
                        workstations).
                        Switches the Symphony files.

IBM Tivoli Workload Scheduler for z/OS data sets and files used for
end-to-end scheduling

The Tivoli Workload Scheduler for z/OS server and controller use the following
data sets and files for end-to-end scheduling:
EQQTWSIN              Sequential data set used to queue events sent by the
                      server to the controller (the inbound queue). Must be
                      defined in Tivoli Workload Scheduler for z/OS controller
                      and the end-to-end server started task procedure (shown
                      as TWSIN in Figure 2-15 on page 63).
EQQTWSOU              Sequential data set used to queue events sent by the
                      controller to the server (the outbound queue). Must be
                      defined in Tivoli Workload Scheduler for z/OS controller
                      and the end-to-end server started task procedure (shown
                      as TWSOU in Figure 2-15 on page 63).
EQQTWSCS              Partitioned data set used to temporarily store a script
                      when it is downloaded from the Tivoli Workload Scheduler
                      for z/OS JOBLIB data set to the fault-tolerant agent for its
                      submission. This data set is shown as TWSCS in
                      Figure 2-15 on page 63.
                      This data set is described in “Tivoli Workload Scheduler
                      for z/OS end-to-end database objects” on page 69.
Symphony              HFS file containing the active copy of the plan used by the
                      distributed Tivoli Workload Scheduler agents.
Sinfonia              HFS file containing the distribution copy of the plan used
                      by the distributed Tivoli Workload Scheduler agents. This
                      file is not shown in Figure 2-15 on page 63.
NetReq.msg            HFS file used to queue requests for the netman process.
Mailbox.msg           HFS file used to queue events sent to the mailman
                      process.
Intercom.msg          HFS file used to queue events sent to the batchman
                      process.
tomaster.msg          HFS file used to queue events sent to the input translator
                      process.




Translator.chk             HFS file used as checkpoint file for the translator process.
                                            It is equivalent to the checkpoint data set used by the
                                            Tivoli Workload Scheduler for z/OS controller. For
                                            example, it contains information about the status of the
                                            Tivoli Workload Scheduler for z/OS current plan,
                                            Symphony run number, Symphony availability. This file is
                                            not shown in Figure 2-15 on page 63.
                  Translator.wjl             HFS file used to store information about job log retrievals
                                             and script downloads that are in progress. At
                                             initialization, the translator checks the Translator.wjl file
                                             for job log retrievals and script downloads that did not
                                             complete (either correctly or in error) and sends the error
                                             back to the controller. This file is not shown in Figure 2-15
                                             on page 63.
                 EQQSCLIB                   Partitioned data set used as a repository for jobs with
                                            non-centralized script definitions running on FTAs. The
                                            EQQSCLIB data set is described in “Tivoli Workload
                                            Scheduler for z/OS end-to-end database objects” on
                                            page 69. It is not shown in Figure 2-15 on page 63.
                  EQQSCPDS                   VSAM data set containing a copy of the current plan
                                            used by the daily plan batch programs to create the
                                            Symphony file.
                                            The end-to-end plan creating process is described in
                                            2.3.4, “Tivoli Workload Scheduler for z/OS end-to-end
                                            plans” on page 75. It is not shown in Figure 2-15 on
                                            page 63.


2.3.3 Tivoli Workload Scheduler for z/OS end-to-end configuration
                 The topology of the distributed IBM Tivoli Workload Scheduler network that is
                 connected to the IBM Tivoli Workload Scheduler for z/OS engine is described in
                 parameter statements for the Tivoli Workload Scheduler for z/OS server and for
                 the Tivoli Workload Scheduler for z/OS programs that handle the long-term plan
                 and the current plan.

                 Parameter statements are also used to activate the end-to-end subtasks in the
                 Tivoli Workload Scheduler for z/OS controller.

                  The parameter statements that are used to describe the topology are covered in
                 4.2.6, “Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end
                 scheduling” on page 174. This section also includes an example of how to reflect
                 a specific Tivoli Workload Scheduler network topology in Tivoli Workload




Scheduler for z/OS servers and plan programs using the Tivoli Workload
Scheduler for z/OS topology parameter statements.
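
As an illustration of what these statements can look like, the following sketch uses
invented directory, host, domain, workstation, and user values; the TOPOLOGY,
DOMREC/CPUREC, and USRREC statements are normally placed in separate parameter
library members, and the complete syntax is given in 4.2.6 on page 174:

   TOPOLOGY BINDIR('/usr/lpp/TWS/V8R2M0')
            WRKDIR('/var/TWS/inst')
            HOSTNAME('mvs1.example.com')
            PORTNUMBER(31111)
            TPLGYMEM(TPLGINFO)
            USRMEM(USRINFO)

   DOMREC   DOMAIN(DOMAINA)
            DOMMNGR(F100)
            DOMPARENT(MASTERDM)

   USRREC   USRCPU(F200)
            USRNAM(tws)
            USRPSW('secret')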

Tivoli Workload Scheduler for z/OS end-to-end database
objects
In order to run jobs on fault-tolerant agents or extended agents, one must first
define database objects related to the Tivoli Workload Scheduler workload in
Tivoli Workload Scheduler for z/OS databases.

The Tivoli Workload Scheduler for z/OS end-to-end related database objects are:
   IBM Tivoli Workload Scheduler for z/OS fault tolerant workstations
   A fault tolerant workstation is a computer workstation configured to schedule
   jobs on FTAs. The workstation must also be defined in the server CPUREC
   initialization statement (see Figure 2-16 on page 70).
   IBM Tivoli Workload Scheduler for z/OS job streams, jobs, and dependencies
   Job streams and jobs to run on Tivoli Workload Scheduler FTAs are defined
   like other job streams and jobs in Tivoli Workload Scheduler for z/OS. To run
   a job on a Tivoli Workload Scheduler FTA, the job is simply defined on a fault
   tolerant workstation. Dependencies between Tivoli Workload Scheduler
   distributed jobs are created exactly the same way as other job dependencies
   in the Tivoli Workload Scheduler for z/OS controller. This is also the case
   when creating dependencies between Tivoli Workload Scheduler distributed
   jobs and Tivoli Workload Scheduler for z/OS mainframe jobs.
   Some of the Tivoli Workload Scheduler for z/OS mainframe-specific options
   are not available for Tivoli Workload Scheduler distributed jobs.




                  Figure 2-16 A workstation definition and its corresponding CPUREC (the F100 workstation
                  definition as shown in ISPF and in the JSC, together with the topology (CPUREC) definition
                  for the F100 workstation)
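
                      As a sketch only (the node name, TCP port, and user below are invented values;
                      see 4.2.6 on page 174 for the full syntax), the CPUREC definition for a fault
                      tolerant workstation such as F100 might look similar to this:

                         CPUREC   CPUNAME(F100)
                                  CPUOS(AIX)
                                  CPUNODE('dma.example.com')
                                  CPUTCPIP(31182)
                                  CPUDOMAIN(DOMAINA)
                                  CPUTYPE(FTA)
                                  CPUAUTOLNK(ON)
                                  CPUFULLSTAT(ON)
                                  CPURESDEP(ON)
                                  CPULIMIT(20)
                                  CPUUSER(tws)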

                     IBM Tivoli Workload Scheduler for z/OS resources
                     Only global resources are supported and can be used for Tivoli Workload
                     Scheduler distributed jobs. This means that the resource dependency is
                     resolved by the Tivoli Workload Scheduler for z/OS controller and not locally
                     on the FTA.
                     For a job running on an FTA, the use of resources causes the loss of fault
                     tolerance. Only the controller determines the availability of a resource and
                     consequently lets the FTA start the job. Thus, if a job running on an FTA uses
                     a resource, the following occurs:
                     – When the resource is available, the controller sets the state of the job to
                       started and the extended status to waiting for submission.
                     – The controller sends a release-dependency event to the FTA.
                     – The FTA starts the job.
                     If the connection between the engine and the FTA is broken, the operation
                     does not start on the FTA even if the resource becomes available.




Note: Special resource dependencies are represented differently
 depending on whether you are looking at the job through Tivoli Workload
 Scheduler for z/OS interfaces or Tivoli Workload Scheduler interfaces. If
 you observe the job using Tivoli Workload Scheduler for z/OS interfaces,
 you can see the resource dependencies as expected.

 However, when you monitor a job on a fault-tolerant agent by means of the
 Tivoli Workload Scheduler interfaces, you will not be able to see the
 resource that is used by the job. Instead you will see a dependency on a
 job called OPCMASTER#GLOBAL.SPECIAL_RESOURCES. This dependency is set
 by the engine. Every job that has special resource dependencies has a
  dependency on this job.

 When the engine allocates the resource for the job, the dependency is
 released. (The engine sends a release event for the specific job through
 the network.)


The task or script associated with the FTA job, defined in Tivoli Workload
Scheduler for z/OS
In IBM Tivoli Workload Scheduler for z/OS 8.2, the task or script associated with
the FTA job can be defined in two different ways:
a. Non-centralized script
   The job or task definition is stored in a special partitioned data set,
   EQQSCLIB, allocated in the Tivoli Workload Scheduler for z/OS controller
   started task procedure. The script itself (the JCL) resides on the
   fault-tolerant agent. This is the default behavior in Tivoli Workload
   Scheduler for z/OS for fault-tolerant agent jobs.
b. Centralized script
   The job is defined in Tivoli Workload Scheduler for z/OS with the Centralized
   Script option set to Y (Yes).

    Note: The default for all operations and jobs in Tivoli Workload
    Scheduler for z/OS is N (No).

   A centralized script resides in the IBM Tivoli Workload Scheduler for z/OS
   JOBLIB and is downloaded to the fault-tolerant agent every time the job is
   submitted. The concept of centralized script has been added for
   compatibility with the way that Tivoli Workload Scheduler for z/OS
   manages jobs in the z/OS environment.




Non-centralized script
                 For every FTA job definition in Tivoli Workload Scheduler for z/OS where the
                 centralized script option is set to N (non-centralized script) there must be a
                 corresponding member in the EQQSCLIB data set. The members of EQQSCLIB
                 contain a JOBREC statement that describes the path to the job or the command
                  to be executed and, optionally, the user under which the job or command is
                  executed.

                 Example for a UNIX script:
                     JOBREC JOBSCR(/Tivoli/tws/scripts/script001_accounting) JOBUSR(userid01)

                 Example for a UNIX command:
                     JOBREC JOBCMD(ls) JOBUSR(userid01)

                 If the JOBUSR (user for the job) keyword is not specified, the user defined in the
                 CPUUSER keyword of the CPUREC statement for the fault-tolerant workstation
                 is used.

                 If necessary, Tivoli Workload Scheduler for z/OS JCL variables can be used in
                 the JOBREC definition. Tivoli Workload Scheduler for z/OS JCL variables and
                  variable substitution in an EQQSCLIB member are managed and controlled by
                 VARSUB statements placed directly in the EQQSCLIB member with the
                 JOBREC definition for the particular job.

                 Furthermore, it is possible to define Tivoli Workload Scheduler recovery options
                 for the job defined in the JOBREC statement. Tivoli Workload Scheduler
                 recovery options are defined with RECOVERY statements placed directly in the
                 EQQSCLIB member with the JOBREC definition for the particular job.
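
                  For instance, a single EQQSCLIB member might combine the three statements
                  roughly as in the following sketch; the variable table name (ACCTAB), the
                  user-defined variable &RUNDATE, the script path, the user, and the recovery
                  message are all invented for illustration:

                      VARSUB   TABLES(ACCTAB)
                      JOBREC   JOBSCR(/Tivoli/tws/scripts/daily_load_&RUNDATE.)
                               JOBUSR(userid01)
                      RECOVERY OPTION(RERUN)
                               MESSAGE('Check the input files before replying')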

                 The JOBREC (and optionally VARSUB and RECOVERY) definitions are read by
                 the Tivoli Workload Scheduler for z/OS plan programs when producing the new
                 current plan and placed as part of the job definition in the Symphony file.

                 If a Tivoli Workload Scheduler distributed job stream is added to the plan in Tivoli
                 Workload Scheduler for z/OS, the JOBREC definition will be read by Tivoli
                 Workload Scheduler for z/OS, copied to the Symphony file on the Tivoli Workload
                 Scheduler for z/OS server, and sent (as events) by the server to the Tivoli
                 Workload Scheduler agent Symphony files via the directly connected Tivoli
                 Workload Scheduler domain managers.

                 It is important to remember that the EQQSCLIB member only has a pointer (the
                 path) to the job that is going to be executed. The actual job (the JCL) is placed
                 locally on the FTA or workstation in the directory defined by the JOBREC
                 JOBSCR definition.




This also means that it is not possible to use the JCL edit function in Tivoli
Workload Scheduler for z/OS to edit the script (the JCL) for jobs where the script
(the pointer) is defined by a JOBREC statement in the EQQSCLIB data set.

Centralized script
The script for a job defined with the centralized script option set to Y must be
placed in the Tivoli Workload Scheduler for z/OS JOBLIB. The script is defined in
the same way as normal JCL.

It is possible (but not necessary) to define some parameters of the centralized
script, such as the user, in a job definition member of the SCRPTLIB data set.
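
For example (an illustrative sketch), the SCRPTLIB member for a centralized script
job might contain only the user that the job runs under, because the script itself is
taken from the JOBLIB:

   JOBREC JOBUSR(userid01)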

With centralized scripts, you can perform variable substitution, automatic
recovery, JCL editing, and job setup (as for “normal” z/OS jobs defined in the
Tivoli Workload Scheduler for z/OS JOBLIB). It is also possible to use the
job-submit exit (EQQUX001).

Note that jobs with centralized script will be defined in the Symphony file with a
dependency named script. This dependency will be released when the job is
ready to run and the script is downloaded from the Tivoli Workload Scheduler for
z/OS controller to the fault-tolerant agent.

To download a centralized script, the DD statement EQQTWSCS must be
present in the controller and server started tasks. During the download the
<twshome>/centralized directory is created at the fault-tolerant workstation. The
script is downloaded to this directory. If an error occurs during this operation, the
controller retries the download every 30 seconds for a maximum of 10 times. If
the script download still fails after 10 retries, the job (operation) is marked as
Ended-in-error with error code OSUF.
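
For example, both started task procedures would include a DD statement along
these lines (the data set name is an invented value):

   //EQQTWSCS DD DISP=SHR,DSN=TWS820.TWSC.CS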

Here are the detailed steps for downloading and executing centralized scripts on
FTAs (Figure 2-17 on page 75):
1. Tivoli Workload Scheduler for z/OS controller instructs sender subtask to
   begin script download.
2. The sender subtask writes the centralized script to the centralized scripts data
   set (EQQTWSCS).
3. The sender subtask writes a script download event (type JCL, action D) to the
   output queue (EQQTWSOU).
4. The output translator thread reads the JCL-D event from the output queue.
5. The output translator thread reads the script from the centralized scripts data
   set (EQQTWSCS).
6. The output translator thread spawns a script downloader thread.




7. The script downloader thread connects directly to netman on the FTA where
                    the script will run.
                 8. netman spawns dwnldr and connects the socket from the script downloader
                    thread to the new dwnldr process.
                 9. dwnldr downloads the script from the script downloader thread and writes it to
                    the TWSHome/centralized directory on the FTA.
                 10.dwnldr notifies the script downloader thread of the result of the download.
                 11.The script downloader thread passes the result to the input writer thread.
                 12.If the script download was successful, the input writer thread writes a script
                    download successful event (type JCL, action C) on the input queue
                    (EQQTWSIN). If the script download was unsuccessful, the input writer
   thread writes a script download in error event (type JCL, action E) on the
                    input queue.
                 13.The receiver subtask reads the script download result event from the input
                    queue.
                 14.The receiver subtask notifies the Tivoli Workload Scheduler for z/OS
                    controller of the result of the script download. If the result of the script
                    download was successful, the OPC controller then sends a release
                    dependency event (type JCL, action R) to the FTA, via the normal IPC
                    channel (sender subtask → output queue → output translator →
                    Mailbox.msg → mailman → writer on FTA, and so on). This event causes the
                    job to run.




Figure 2-17 Steps and processes for downloading centralized script (diagram: steps 1-14
flow from the OPC controller through the sender subtask, the EQQTWSCS data set, the
output queue, the output translator, and a script downloader thread to netman and dwnldr
on the FTA, where the script (myscript.sh) is written; the result flows back through the input
writer, the input queue, and the receiver subtask to the controller)

                  Creating centralized script in the Tivoli Workload Scheduler for z/OS JOBLIB
                  data set is described in 4.5.2, “Definition of centralized scripts” on page 219.


2.3.4 Tivoli Workload Scheduler for z/OS end-to-end plans
                  When scheduling jobs in the Tivoli Workload Scheduler environment, current
                  plan processing also includes the automatic generation of the Symphony file that
                  goes to the IBM Tivoli Workload Scheduler for z/OS server and IBM Tivoli
                  Workload Scheduler subordinate domain managers as well as fault-tolerant
                  agents.

                  The Tivoli Workload Scheduler for z/OS current plan program is normally run on
                  workdays in the engine as described in 2.1.3, “Tivoli Workload Scheduler for
                  z/OS plans” on page 37.




Figure 2-18 shows a combined view of long-term planning and current planning.
                  Changes to the databases require an update of the long-term plan, so most sites
                 run the LTP Modify batch job immediately before extending the current plan.



                  Figure 2-18 Combined view of the long-term planning and current planning (diagram: the
                  databases (resources, workstations, job streams, calendars, and periods) feed step 1,
                  extend long-term plan (about 90 days), and step 2, extend current plan (one workday); the
                  old current plan has completed job streams removed and detail for the next day added,
                  producing the new current plan)

                 If the end-to-end feature is activated in Tivoli Workload Scheduler for z/OS, the
                 current plan program will read the topology definitions described in the
                  TOPOLOGY, DOMREC, CPUREC, and USRREC initialization statements (see
                 2.3.3, “Tivoli Workload Scheduler for z/OS end-to-end configuration” on page 68)
                 and the script library (EQQSCLIB) as part of the planning process. Information
                 from the initialization statements and the script library will be used to create a
                 Symphony file for the Tivoli Workload Scheduler FTAs (see Figure 2-19 on
                 page 77). The whole process is handled by Tivoli Workload Scheduler for z/OS
                 planning programs.




Figure 2-19 Creation of Symphony file in Tivoli Workload Scheduler for z/OS plan programs
(diagram: during current plan extension and replan, completed job streams are removed
from the old current plan and detail for the next day is added; the TWS plan is extracted from
the current plan, topology (domain and workstation) definitions and task definitions (path
and user) for distributed TWS jobs are added from the topology definitions and the script
library, and the new Symphony file is produced together with the new current plan)

The process is handled by Tivoli Workload Scheduler for z/OS planning
programs, as described in the next section.

Detailed description of the Symphony creation
Figure 2-20 (the same diagram as Figure 2-15 on page 63) shows the tasks and
processes involved in the Symphony creation.




Figure 2-20 IBM Tivoli Workload Scheduler for z/OS 8.2 interprocess communication (the
same diagram as Figure 2-15 on page 63, repeated here for reference during the description
of Symphony creation)

                     1. The process is handled by the Tivoli Workload Scheduler for z/OS planning
                        batch programs. The batch produces the new current plan (NCP) and
                        initializes the SymUSER file.
                     2. The Normal Mode Manager (NMM) sends the SYNC START ('S') event to the
                        server, and the end-to-end receiver starts, leaving all events in the inbound
                        queue (TWSIN).
                    3. When the SYNC START ('S') is processed by the output translator, it stops the
                       OPCMASTER, sends the SYNC END ('E') to the controller, and stops the
                       entire network.
                     4. The NMM applies the job tracking events received while the new plan was
                        produced. It then copies the new current plan data set (NCP) to the Tivoli
                        Workload Scheduler for z/OS current plan data set (CP1 or CP2), makes a
                        current plan backup (copies the active CP1/CP2 to the inactive CP1/CP2), and
                        creates the Symphony Current Plan (SCP) data set as a copy of the active
                        current plan (CP1 or CP2) data set.
                     5. Tivoli Workload Scheduler for z/OS mainframe scheduling is resumed.
                    6. The end-to-end receiver begins to process events in the queue.




7. The SYNC CPREADY ('Y') event is sent to the output translator, which starts
   leaving all the events in the outbound queue (TWSOU).
8. The plan program starts producing the SymUSER file from the SCP and then
   renames it to Symnew.
9. When the Symnew file has been created, the plan program ends and NMM
   notifies the output translator that the Symnew file is ready, sending the SYNC
   SYMREADY ('R') event to the output translator.
10.The output translator renames the old Symphony and Sinfonia files to Symold
   and Sinfold files, and a Symphony OK ('X') or NOT OK ('B') Sync event is sent
   to the Tivoli Workload Scheduler for z/OS engine, which logs a message in
   the engine message log indicating whether the Symphony has been switched.
11.The Tivoli Workload Scheduler for z/OS server master is started in USS, and
   the input translator starts to process new events. As in a distributed Tivoli
   Workload Scheduler network, mailman and batchman process the events left
   in the local event files and start distributing the new Symphony file to the whole
   IBM Tivoli Workload Scheduler network.

When the Symphony file is created by the Tivoli Workload Scheduler for z/OS
plan programs, it (or, more precisely, the Sinfonia file) will be distributed to the
first-level Tivoli Workload Scheduler domain manager, which in turn
distributes the Symphony (Sinfonia) file to its subordinate domain managers and
fault-tolerant agents. (See Figure 2-21 on page 80.)




                  Figure 2-21 Symphony file distribution from ITWS for z/OS server to ITWS agents (diagram:
                  on the z/OS master domain manager the TWS plan is extracted from the TWS for z/OS plan;
                  it is then distributed to domain manager DMZ (AIX) in DomainZ and on to the subordinate
                  domain managers DMA (AIX) and DMB (HPUX) and to FTA1 (AIX), FTA2 (OS/400), FTA3
                  (Windows 2000), and FTA4 (Solaris))

                 The Symphony file is generated:
                     Every time the Tivoli Workload Scheduler for z/OS plan is extended or
                     replanned
                     When a Symphony renew batch job is submitted (from Tivoli Workload
                     Scheduler for z/OS legacy ISPF panels, option 3.5)

                 The Symphony file contains:
                     Jobs to be executed on Tivoli Workload Scheduler FTAs
                     z/OS (mainframe) jobs that are predecessors to Tivoli Workload Scheduler
                     distributed jobs
                     Job streams that have at least one job in the Symphony file
                     Topology information for the distributed network with all the workstation and
                     domain definitions, including the master domain manager of the distributed
                     network; that is, the Tivoli Workload Scheduler for z/OS host.
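
                 The topology itself is described to Tivoli Workload Scheduler for z/OS with the
                 TOPOLOGY, DOMREC, and CPUREC initialization statements, which the plan
                 programs read when they build the Symphony file. As a rough sketch only (the
                 keyword names follow the 8.2 end-to-end statements, but the values and the
                 selection of keywords shown here are illustrative assumptions; see the
                 customization documentation for the full syntax), a first-level domain and one
                 fault-tolerant agent might be described as:

                     DOMREC DOMAIN(DOMAINZ) DOMMGR(DMZ) DOMPARENT(MASTERDM)
                     CPUREC CPUNAME(FTA1)
                            CPUOS(AIX)
                            CPUNODE(fta1.example.com)
                            CPUDOMAIN(DOMAINZ)
                            CPUTYPE(FTA)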




After the Symphony file is created and distributed to the Tivoli Workload
Scheduler FTAs, the Symphony file is updated by events:
   When job status changes
   When jobs or job streams are modified
   When jobs or job streams for the Tivoli Workload Scheduler FTAs are added
   to the plan in the Tivoli Workload Scheduler for z/OS controller.

If you look at the Symphony file locally on a Tivoli Workload Scheduler FTA, from
the Job Scheduling Console, or using the Tivoli Workload Scheduler command
line interface to the plan (conman), you will see that:
   The Tivoli Workload Scheduler workstation has the same name as the related
   workstation defined in Tivoli Workload Scheduler for z/OS for the agent.
   OPCMASTER is the hard-coded name for the master domain manager
   workstation for the Tivoli Workload Scheduler for z/OS controller.
   The name of the job stream (or schedule) is the hexadecimal representation
   of the occurrence (job stream instance) token (internal unique and invariant
   identifier for occurrences). The job streams are always defined on the
   OPCMASTER workstation. (Having no dependencies, this does not reduce
   fault tolerance.) See Figure 2-22 on page 82.
   Using this hexadecimal representation for the job stream instances makes it
   possible to have several instances for the same job stream, because they
   have unique job stream names. Therefore, it is possible to have a plan in the
   Tivoli Workload Scheduler for z/OS controller and a distributed Symphony file
   that spans more than 24 hours.

    Note: In Tivoli Workload Scheduler for z/OS, the key in the plan for an
    occurrence is the job stream name and the input arrival time.

    In the Symphony file, the key is the job stream instance name. Because Tivoli
    Workload Scheduler for z/OS can have several job stream instances with
    the same name in the plan, a unique and invariant identifier (the occurrence
    token) is needed for the occurrence or job stream instance name in the
    Symphony file.

   The job name is made up according to one of the following formats (see
   Figure 2-22 on page 82 for an example):
   – <T>_<opnum>_<applname>
     when the job is created in the Symphony file
   – <T>_<opnum>_<ext>_<applname>
     when the job is first deleted from the current plan and then recreated in the
     current plan


In these examples:
                      – <T> is J for normal jobs (operations), P for jobs that represent pending
                        predecessors, or R for recovery jobs (jobs added by Tivoli Workload
                        Scheduler recovery).
                     – <opnum> is the operation number for the job in the job stream (in current
                       plan).
                     – <ext> is a sequential number that is incremented every time the same
                       operation is deleted and then recreated in current plan; if 0, it is omitted.
                     – <applname> is the name of the occurrence (job stream) the operation
                       belongs to.
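
                      For example, using hypothetical names, a normal job at operation number 20
                      in an occurrence of job stream PAYDAILY would appear in the Symphony file
                      with a name of the form J_20_PAYDAILY, and as J_20_1_PAYDAILY if that
                      operation had been deleted and then recreated once in the current plan. (The
                      exact numeric formatting of <opnum> may differ; Figure 2-22 shows names as
                      they are really generated.)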




                 Figure 2-22 Job name and job stream name as generated in the Symphony file (the figure shows the
                 job name and workstation for a distributed job, and the job stream name and workstation for a job
                 stream, as they appear in the Symphony file)

                  Tivoli Workload Scheduler for z/OS uses the job name and an operation number
                  as the "key" for a job in a job stream.

                  In the Symphony file, only the job name is used as the key. Because Tivoli
                  Workload Scheduler for z/OS can have the same job name several times in one
                  job stream and distinguishes between identical job names with the operation
                  number, the job names generated in the Symphony file contain the Tivoli
                  Workload Scheduler for z/OS operation number as part of the job name.

                  The name of a job stream (application) can contain national characters such as
                  dollar ($), section sign (§), and pound (£). These characters are converted into
                  dashes (-) in the names of the included jobs when the job stream is added to the
                  Symphony file or when the Symphony file is created. For example, consider the
                  job stream name:
                      APPL$$234§§ABC£

                 In the Symphony file, the names of the jobs in this job stream will be:
                     <T>_<opnum>_APPL--234--ABC-

                  This nomenclature is still valid because the job stream instance (occurrence) is
                  identified by the occurrence token, and the operations are each identified by the
                  operation numbers (<opnum>) that are part of the job names in the Symphony
                  file.

 Note: The criteria that are used to generate job names in the Symphony file
 can be managed by the Tivoli Workload Scheduler for z/OS JTOPTS
 TWSJOBNAME() parameter, which was introduced with APAR PQ77970. It is
 possible, for example, to use the job name (from the operation) instead of the
 job stream name for the job name in the Symphony file, so the job name will
 be <T>_<opnum>_<jobname> in the Symphony file.
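
As a minimal sketch of such a setting (the keyword value JOBNAME is an
assumption based on the description in the note above; check the APAR and
customization documentation for the values supported at your level), the
parameter is coded on the JTOPTS initialization statement:

   JTOPTS TWSJOBNAME(JOBNAME)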

In normal situations, the Symphony file is generated automatically as part of the
Tivoli Workload Scheduler for z/OS plan process, and the topology definitions are
read and built into the Symphony file by the Tivoli Workload Scheduler for z/OS
plan programs. Even so, situations can occur in regular operation where you need
to renew (or rebuild) the Symphony file from the Tivoli Workload Scheduler for
z/OS plan:
   When you make changes to the script library or to the definitions of the
   TOPOLOGY statement
   When you add or change information in the plan, such as workstation
   definitions

To have the Symphony file rebuilt or renewed, you can use the Symphony Renew
option of the Daily Planning menu (option 3.5 in the legacy IBM Tivoli Workload
Scheduler for z/OS ISPF panels).

This renew function can also be used to recover from error situations such as:
   A non-valid job definition in the script library
   Incorrect workstation definitions
   An incorrect Windows user name or password
   Changes to the script library or to the definitions of the TOPOLOGY
   statement

In 5.8.5, “Common errors for jobs on fault-tolerant workstations” on page 334, we
describe how to correct several of these error situations without redistributing the
Symphony file. It is worthwhile to become familiar with these alternatives before
you start redistributing a Symphony file in a heavily loaded production
environment.




2.3.5 Making the end-to-end scheduling system fault tolerant
                  In the following, we cover some possible cases of failure in end-to-end
                  scheduling and ways to mitigate these failures:
                 1. The Tivoli Workload Scheduler for z/OS engine (controller) can fail due to a
                    system or task outage.
                 2. The Tivoli Workload Scheduler for z/OS server can fail due to a system or
                    task outage.
                 3. The domain managers at the first level, that is the domain managers directly
                    connected to the Tivoli Workload Scheduler for z/OS server, can fail due to a
                    system or task outage.

                 To avoid an outage of the end-to-end workload managed in the Tivoli Workload
                 Scheduler for z/OS engine and server and in the Tivoli Workload Scheduler
                 domain manager, you should consider:
                     Using a standby engine (controller) for the Tivoli Workload Scheduler for z/OS
                     engine (controller).
                     Making sure that your Tivoli Workload Scheduler for z/OS server can be
                     reached if the Tivoli Workload Scheduler for z/OS engine (controller) is moved
                     to one of its standby engines (TCP/IP configuration in your enterprise).
                      Remember that the end-to-end server started task must always be active on
                      the same z/OS system as the active engine (controller).
                     Defining backup domain managers for your Tivoli Workload Scheduler
                     domain managers at the first level.

                      Note: It is a good practice to define backup domain managers for all
                      domain managers in the distributed Tivoli Workload Scheduler network.

                 Figure 2-23 shows an example of a fault-tolerant end-to-end network with a Tivoli
                 Workload Scheduler for z/OS standby controller engine and one Tivoli Workload
                 Scheduler backup domain manager for one Tivoli Workload Scheduler domain
                 manager at the first level.




Figure 2-23 Redundant configuration with standby engine and IBM Tivoli Workload Scheduler backup DM

If the domain manager for DomainZ fails, it will be possible to switch to the
backup domain manager. The backup domain manager has an updated
Symphony file and knows the subordinate domain managers and fault-tolerant
agents, so it can take over the responsibilities of the domain manager. This
switch can be performed without any outages in the workload management.
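
As an illustration, the switch itself can be initiated with the conman switchmgr
command on the backup domain manager. The domain and workstation names
below are assumptions that follow Example 2-1 later in this section, where FDMB
is the fault-tolerant workstation of the backup domain manager:

   conman "switchmgr DOMAINZ;FDMB"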

If the switch to the backup domain manager is going to be active across the Tivoli
Workload Scheduler for z/OS plan extension, you must change the topology
definitions in the Tivoli Workload Scheduler for z/OS DOMREC initialization
statements. The backup domain manager fault-tolerant workstation then becomes
the first-level domain manager for the Tivoli Workload Scheduler distributed
network, even after the plan extension.

Example 2-1 shows how to change the name of the fault tolerant workstation in
the DOMREC initialization statement, if the switch to the backup domain
manager is effective across the Tivoli Workload Scheduler for z/OS plan
extension. (See 5.5.4, “Switch to Tivoli Workload Scheduler backup domain
manager” on page 308 for more information.)



Example 2-1 DOMREC initialization statement
                 DOMREC DOMAIN(DOMAINZ) DOMMGR(FDMZ) DOMPARENT(MASTERDM)

                 Should be changed to:

                 DOMREC DOMAIN(DOMAINZ) DOMMGR(FDMB) DOMPARENT(MASTERDM)

                 Where FDMB is the name of the fault tolerant workstation where the backup
                 domain manager is running.


                 If the Tivoli Workload Scheduler for z/OS engine or server fails, it will be possible
                 to let one of the standby engines in the same sysplex take over. This takeover
                 can be accomplished without any outages in the workload management.

                 The Tivoli Workload Scheduler for z/OS server must follow the Tivoli Workload
                 Scheduler for z/OS engine. That is, if the Tivoli Workload Scheduler for z/OS
                 engine is moved to another system in the sysplex, the Tivoli Workload Scheduler
                 for z/OS server must be moved to the same system in the sysplex.

                   Note: The synchronization between the Symphony file on the Tivoli Workload
                   Scheduler domain manager and the Symphony file on its backup domain
                   manager has improved considerably with FixPack 04 for IBM Tivoli Workload
                   Scheduler, which introduces an enhanced fault-tolerant switch manager
                   function.


2.3.6 Benefits of end-to-end scheduling
                 The benefits that can be gained from using the Tivoli Workload Scheduler for
                 z/OS end-to-end scheduling include:
                      The ability to connect Tivoli Workload Scheduler fault-tolerant agents to a
                      Tivoli Workload Scheduler for z/OS controller.
                     Scheduling on additional operating systems.
                     The ability to define resource dependencies between jobs that run on different
                     FTAs or in different domains.
                     Synchronizing work in mainframe and distributed environments.
                     The ability to organize the scheduling network into multiple tiers, delegating
                     some responsibilities to Tivoli Workload Scheduler domain managers.
                      Extended planning capabilities, such as the use of long-term plans, trial
                      plans, and extended plans, also for the Tivoli Workload Scheduler network.
                      Extended plans also mean that the current plan can span more than 24
                      hours. One possible benefit is being able to extend a current plan over a time
                      period when no one will be available to verify that the current plan was
                      successfully created each day, such as over a holiday weekend. The
                      end-to-end environment also allows the current plan to be extended for a
                      specified length of time, or the current plan to be replanned to remove
                      completed jobs.
Powerful run-cycle and calendar functions. Tivoli Workload Scheduler
end-to-end enables more complex run cycles and rules to be defined to
determine when a job stream should be scheduled.
Ability to create a Trial Plan that can span more than 24 hours.
Improved use of resources (keep resource if job ends in error).
Enhanced use of host names instead of dotted IP addresses.
Multiple job or job stream instances in the same plan. In the end-to-end
environment, job streams are renamed using a unique identifier so that
multiple job stream instances can be included in the current plan.
The ability to use batch tools (for example, Batchloader, Massupdate, OCL,
BCIT) that enable batched changes to be made to the Tivoli Workload
Scheduler end-to-end database and plan.
The ability to specify at the job level whether the job’s script should be
centralized (placed in Tivoli Workload Scheduler for z/OS JOBLIB) or
non-centralized (placed locally on the Tivoli Workload Scheduler agent).
Use of Tivoli Workload Scheduler for z/OS JCL variables in both centralized
and non-centralized scripts.
The ability to use Tivoli Workload Scheduler for z/OS recovery in centralized
scripts or Tivoli Workload Scheduler recovery in non-centralized scripts.
The ability to define and browse operator instructions associated with jobs in
the database and plan. In a Tivoli Workload Scheduler distributed
environment, it is possible to insert comments or a description in a job
definition, but these comments and description are not visible from the plan
functions.
The ability to define a job stream that will be submitted automatically to Tivoli
Workload Scheduler when one of the following events occurs in the z/OS
system: a particular job is executed or terminated, a specified resource
becomes available, or a z/OS data set is created or opened.




Considerations
                 Implementing Tivoli Workload Scheduler for z/OS end-to-end also imposes some
                 limitations:
                      Windows users’ passwords are defined directly (without any encryption) in the
                      Tivoli Workload Scheduler for z/OS server initialization parameters. It is
                      possible to place these definitions in a separate library with restricted access
                      (restricted by RACF, for example) to authorized persons. (A sketch of such a
                      definition follows this list.)
                     In an end-to-end configuration, some of the conman command options are
                     disabled. On an end-to-end FTA, the conman command only allows display
                     operations and the subset of commands (such as kill, altpass, link/unlink,
                     start/stop, switchmgr) that do not affect the status or sequence of jobs.
                     Command options that could affect the information that is contained in the
                     Symphony file are not allowed. For a complete list of allowed conman
                     commands, refer to 2.7, “conman commands in the end-to-end environment”
                     on page 106.
                     Workstation classes are not supported in an end-to-end configuration.
                     The LIMIT attribute is supported on the workstation level, not on the job
                     stream level in an end-to-end environment.
                     Some Tivoli Workload Scheduler functions are not available directly on Tivoli
                     Workload Scheduler FTAs, but can be handled by other functions in Tivoli
                     Workload Scheduler for z/OS.
                     For example:
                     – IBM Tivoli Workload Scheduler prompts
                         •   Recovery prompts are supported.
                         •   The Tivoli Workload Scheduler predefined and ad hoc prompts can be
                             replaced with the manual workstation function in Tivoli Workload
                             Scheduler for z/OS.
                     – IBM Tivoli Workload Scheduler file dependencies
                         •   It is not possible to define file dependencies directly at job level in Tivoli
                             Workload Scheduler for z/OS for distributed Tivoli Workload Scheduler
                             jobs.
                         •   The filewatch program that is delivered with Tivoli Workload Scheduler
                             can be used to create file dependencies for distributed jobs in Tivoli
                             Workload Scheduler for z/OS. Using the filewatch program, the file
                             dependency is “replaced” by a job dependency in which a predecessor
                             job checks for the file using the filewatch program.




– Dependencies on job stream level
              The traditional way to handle these types of dependencies in Tivoli
              Workload Scheduler for z/OS is to define a “dummy start” and “dummy
              end” job at the beginning and end of the job streams, respectively.
            – Repeat range (that is, “rerun this job every 10 minutes”)
              Although there is no built-in function for this in Tivoli Workload Scheduler
              for z/OS, it can be accomplished in different ways, such as by defining the
              job repeatedly in the job stream with specific start times or by using a PIF
              (Tivoli Workload Scheduler for z/OS Programming Interface) program to
              rerun the job every 10 minutes.
           – Job priority change
              Job priority cannot be changed directly for an individual fault-tolerant job.
              In an end-to-end configuration, it is possible to change the priority of a job
              stream. When the priority of a job stream is changed, all jobs within the job
              stream will have the same priority.
           – Internetwork dependencies
              An end-to-end configuration supports dependencies on a job that is
              running in the same Tivoli Workload Scheduler end-to-end or distributed
              topology (network).
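
         As noted in the first consideration in this list, the Windows user and password
         definitions are kept in the server initialization parameters. A minimal sketch,
         assuming the USRREC statement keywords of the 8.2 end-to-end feature (the
         workstation, user, and password values are purely illustrative):

             USRREC USRCPU(FTA3)
                    USRNAM(twsuser)
                    USRPSW('secret')

         Because the password appears in clear text, the member that contains these
         definitions is a natural candidate for the separate, access-restricted library
         mentioned above.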



2.4 Job Scheduling Console and related components
        The Job Scheduling Console (JSC) provides another way of working with Tivoli
        Workload Scheduler for z/OS databases and current plan. The JSC is a graphical
        user interface that connects to the Tivoli Workload Scheduler for z/OS engine via
        a Tivoli Workload Scheduler for z/OS TCP/IP server task. Usually this task is
        dedicated exclusively to handling JSC communications. Later in this book, the
        server task that is dedicated to JSC communications will be referred to as the
        JSC server (Figure 2-24 on page 90).

        The TCP/IP server is a separate address space, started and stopped either
        automatically by the engine or by the user via the z/OS start and stop
        commands. More than one TCP/IP server can be associated with an engine.




                  Figure 2-24 Communication between JSC and ITWS for z/OS via the JSC Server

                 The Job Scheduling Console can be run on almost any platform. Using the JSC,
                 an operator can access both Tivoli Workload Scheduler and Tivoli Workload
                 Scheduler for z/OS scheduling engines. In order to communicate with the
                 scheduling engines, the JSC requires several additional components to be
                 installed:
                     Tivoli Management Framework
                     Job Scheduling Services (JSS)
                     Tivoli Workload Scheduler connector, Tivoli Workload Scheduler for z/OS
                     connector, or both

                 The Job Scheduling Services and the connectors must be installed on top of the
                 Tivoli Management Framework. Together, the Tivoli Management Framework,
                 the Job Scheduling Services, and the connector provide the interface between
                 JSC and the scheduling engine.

                 The Job Scheduling Console is installed locally on your desktop computer, laptop
                 computer, or workstation.


2.4.1 A brief introduction to the Tivoli Management Framework
                 Tivoli Management Framework provides the foundation on which the Job
                 Scheduling Services and connectors are installed. It also performs access
                 verification when a Job Scheduling Console user logs in. The Tivoli Management
                 Environment (TME®) uses the concept of Tivoli Management Regions (TMRs).
                 There is a single server for each TMR, called the TMR server; this is analogous



90   End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
to the IBM Tivoli Workload Scheduler master server. The TMR server contains
           the Tivoli object repository (a database used by the TMR). Managed nodes are
           semi-independent agents that are installed on other nodes in the network; these
           are roughly analogous to Tivoli Workload Scheduler fault-tolerant agents. For
           more information about the Tivoli Management Framework, see the IBM Tivoli
           Management Framework 4.1 User’s Guide, GC32-0805.


2.4.2 Job Scheduling Services (JSS)
           The Job Scheduling Services component provides a unified interface in the Tivoli
           Management Framework for different job scheduling engines. Job Scheduling
           Services does not do anything on its own; it requires additional components
           called connectors in order to connect to job scheduling engines. It must be
           installed on either the TMR server or a managed node.


2.4.3 Connectors
           Connectors are the components that enable the Job Scheduling Services to talk
           with different types of scheduling engines. When working with a particular type of
           scheduling engine, the Job Scheduling Console communicates with the
           scheduling engine via the Job Scheduling Services and the connector. A different
           connector is required for each type of scheduling engine. A connector can only
           be installed on a computer where the Tivoli Management Framework and Job
           Scheduling Services have already been installed.

           There are two types of connectors for connecting to the two types of scheduling
           engines in the IBM Tivoli Workload Scheduler 8.2 suite:
              IBM Tivoli Workload Scheduler for z/OS connector (or OPC connector)
              IBM Tivoli Workload Scheduler connector

           Job Scheduling Services communicates with the engine via the connector of the
           appropriate type. When working with a Tivoli Workload Scheduler for z/OS
           engine, the JSC communicates via the Tivoli Workload Scheduler for z/OS
           connector. When working with a Tivoli Workload Scheduler engine, the JSC
           communicates via the Tivoli Workload Scheduler connector.

            The two types of connectors function somewhat differently: The Tivoli Workload
            Scheduler for z/OS connector communicates over TCP/IP with the Tivoli
            Workload Scheduler for z/OS engine running on a mainframe (MVS or z/OS)
            computer. The Tivoli Workload Scheduler connector performs direct reads and
            writes of the Tivoli Workload Scheduler plan and database files on the same
            computer where the connector runs.




A connector instance must be created before the connector can be used. Each
                 type of connector can have multiple instances. A separate instance is required for
                 each engine that will be controlled by JSC.

                 We will now discuss each type of connector in more detail.

                 Tivoli Workload Scheduler for z/OS connector
                 Also sometimes called the OPC connector, the Tivoli Workload Scheduler for
                 z/OS connector can be instantiated on any TMR server or managed node. The
                 Tivoli Workload Scheduler for z/OS connector instance communicates via TCP
                 with the Tivoli Workload Scheduler for z/OS TCP/IP server. You might, for
                 example, have two different Tivoli Workload Scheduler for z/OS engines that both
                 must be accessible from the Job Scheduling Console. In this case, you would
                 install one connector instance for working with one Tivoli Workload Scheduler for
                 z/OS engine, and another connector instance for communicating with the other
                 engine. When a Tivoli Workload Scheduler for z/OS connector instance is
                 created, the IP address (or host name) and TCP port number of the Tivoli
                 Workload Scheduler for z/OS engine’s TCP/IP server are specified. The Tivoli
                 Workload Scheduler for z/OS connector uses these two pieces of information to
                 connect to the Tivoli Workload Scheduler for z/OS engine. See Figure 2-25 on
                 page 93.

                 Tivoli Workload Scheduler connector
                 The Tivoli Workload Scheduler connector must be instantiated on the host where
                 the Tivoli Workload Scheduler engine is installed so that it can access the plan
                 and database files locally. This means that the Tivoli Management Framework
                 must be installed (either as a TMR server or managed node) on the server where
                 the Tivoli Workload Scheduler engine resides. Usually, this server is the Tivoli
                 Workload Scheduler master domain manager. But it may also be desirable to
                 connect with JSC to another domain manager or to a fault-tolerant agent. If
                 multiple instances of Tivoli Workload Scheduler are installed on a server, it is
                 possible to have one Tivoli Workload Scheduler connector instance for each
                 Tivoli Workload Scheduler instance on the server. When a Tivoli Workload
                 Scheduler connector instance is created, the full path to the Tivoli Workload
                 Scheduler home directory associated with that Tivoli Workload Scheduler
                 instance is specified. This is how the Tivoli Workload Scheduler connector knows
                 where to find the Tivoli Workload Scheduler databases and plan. See
                 Figure 2-25 on page 93.
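
                  As a rough sketch only of how connector instances are created (the wopcconn
                  and wtwsconn.sh utilities are the ones delivered for this purpose, but the option
                  names and values shown here are assumptions; see the Job Scheduling
                  Console installation documentation for the exact syntax):

                     # ITWS for z/OS (OPC) connector instance, pointing at the engine's TCP/IP server
                     wopcconn -create -e TWSC -a twsc.example.com -p 424

                     # ITWS connector instance, pointing at the local ITWS home directory
                     wtwsconn.sh -create -n DMB -t /opt/tws/twshome

                  In both cases, the instance name is what Job Scheduling Console users later
                  see in their list of available engines.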

                 Connector instances
                 We now give some examples of how connector instances might be installed in
                 the real world.




One connector instance of each type
In Figure 2-25, there are two connector instances, one of each type:
   The Tivoli Workload Scheduler for z/OS connector instance is associated with
   a Tivoli Workload Scheduler for z/OS engine running in a remote sysplex.
   Communication between the connector instance and the remote scheduling
   engine is conducted over a TCP connection.
   The Tivoli Workload Scheduler connector instance is associated with a Tivoli
   Workload Scheduler engine installed on the same AIX server. The Tivoli
   Workload Scheduler connector instance reads from and writes to the plan
   (the Symphony file) of the Tivoli Workload Scheduler engine.



Figure 2-25 One ITWS for z/OS connector and one ITWS connector instance




Tip: Tivoli Workload Scheduler connector instances must be created on the
                  server where the Tivoli Workload Scheduler engine is installed. This is
                   because the connector must have local access to the Tivoli Workload
                   Scheduler engine (specifically, to the plan and database files). This
                  limitation obviously does not apply to Tivoli Workload Scheduler for z/OS
                  connector instances because the Tivoli Workload Scheduler for z/OS
                  connector communicates with the remote Tivoli Workload Scheduler for z/OS
                  engine over TCP/IP.

                 In this example, the connectors are installed on the domain manager DMB. This
                 domain manager has one connector instance of each type:
                     A Tivoli Workload Scheduler connector to monitor the plan file (Symphony)
                     locally on DMB
                     A Tivoli Workload Scheduler for z/OS (OPC) connector to work with the
                     databases and current plan on the mainframe

                 Having the Tivoli Workload Scheduler connector installed on a DM provides the
                 operator with the ability to use JSC to look directly at the Symphony file on that
                 workstation. This is particularly useful in the event that problems arise during the
                 production day. If any discrepancy appears between the state of a job in the Tivoli
                 Workload Scheduler for z/OS current plan and the Symphony file on an FTA, it is
                 useful to be able to look at the Symphony file directly. Another benefit is that
                 retrieval of job logs from an FTA is much faster when the job log is retrieved
                 through the Tivoli Workload Scheduler connector. If the job log is fetched through
                 the Tivoli Workload Scheduler for z/OS engine, it can take much longer.

                 Connectors on multiple domain managers
                 With the previous version of IBM Tivoli Workload Scheduler — Version 8.1 — it
                 was necessary to have a single primary domain manager that was the parent of
                 all other domain managers. Figure 2-25 on page 93 shows an example of such
                 an arrangement. Tivoli Workload Scheduler 8.2 removes this limitation. With
                 Version 8.2, it is possible to have more than one domain manager directly under
                 the master domain manager. Most end-to-end scheduling networks will have
                 more than one domain manager under the master. For this reason, it is a good
                 idea to install the Tivoli Workload Scheduler connector and OPC connector on
                 more than one domain manager.




Figure 2-26 An example with two connector instances of each type


 Note: It is a good idea to set up more than one Tivoli Workload Scheduler for
 z/OS connector instance associated with the engine (as in Figure 2-26). This
 way, if there is a problem with one of the workstations running the connector,
 JSC users will still be able to access the Tivoli Workload Scheduler for z/OS
 engine via the other connector. If JSC access is important to your enterprise, it
 is vital to set up redundant connector instances like this.

Next, we discuss the connectors in more detail.

The connector programs
These are the programs that run behind the scenes to make the connectors work.
Each program and its function is described below.

Programs of the IBM Tivoli Workload Scheduler for z/OS connector
The programs that comprise the Tivoli Workload Scheduler for z/OS connector
are located in $BINDIR/OPC (Figure 2-27 on page 96).




                  Figure 2-27 Programs of the IBM Tivoli Workload Scheduler for z/OS (OPC) connector

                      opc_connector
                      The main connector program, which contains the implementation of the main
                      connector methods (basically, all the methods that are required to connect to
                      and retrieve data from the Tivoli Workload Scheduler for z/OS engine). It is
                      implemented as a threaded daemon, which means that it is started
                      automatically by the Tivoli Framework when the first request that it should
                      handle arrives, and it stays active until no requests have been received for a
                      long time. After it is started, it starts new threads for all JSC requests that
                      require data from a specific Tivoli Workload Scheduler for z/OS engine.
                      opc_connector2
                      A small connector program that contains the implementation of small
                      methods that do not require data from Tivoli Workload Scheduler for z/OS.
                      This program is implemented per method, which means that the Tivoli
                      Framework starts the program when a method that it implements is called,
                      the process performs the action for that method, and the process then
                      terminates. This is useful for methods that can be isolated and for which it is
                      not worthwhile to keep the process active, such as the ones called by JSC
                      when it starts and asks for information from all of the connectors.




Programs of the IBM Tivoli Workload Scheduler connector
The programs that comprise the Tivoli Workload Scheduler connector are
located in $BINDIR/Maestro (Figure 2-28).


Figure 2-28 Programs of the IBM Tivoli Workload Scheduler connector

   maestro_engine
   The maestro_engine program performs authentication when a user logs in via
   the Job Scheduling Console. It also starts and stops the Tivoli Workload
   Scheduler engine. It is started by the Tivoli Management Framework
   (specifically, the oserv program) when a user logs in from JSC. It terminates
   after 30 minutes of inactivity.

    Note: oserv is the Tivoli service that is used as the object request broker
    (ORB). This service runs on the Tivoli management region server and
    each managed node.

   maestro_plan
   The maestro_plan program reads from and writes to the Tivoli Workload
   Scheduler plan. It also handles switching to a different plan. The program is
   started when a user accesses the plan. It terminates after 30 minutes of
   inactivity.



maestro_database
                     The maestro_database program reads from and writes to the Tivoli Workload
                     Scheduler database files. It is started when a JSC user accesses a database
                     object or creates a new object definition. It terminates after 30 minutes of
                     inactivity.
                     job_instance_output
                      The job_instance_output program retrieves job standard list files. It is started
                     when a JSC user runs the Browse Job Log operation. It starts up, retrieves
                     the requested stdlist file, and then terminates.
                     maestro_x_server
                     The maestro_x_server program is used to provide an interface to certain
                     types of extended agents, such as the SAP R/3 extended agent (r3batch). It
                     starts up when a command is run in JSC that requires execution of an agent
                     method. It runs the X-agent method, returns the output, and then terminates.
                     It only runs on workstations that host an r3batch extended agent.



2.5 Job log retrieval in an end-to-end environment
                  In this section, we cover the detailed steps of job log retrieval in an end-to-end
                  environment using the JSC. The steps differ depending on which connector is
                  used to retrieve the job log and whether firewalls are involved. We cover all of
                  these scenarios: using the Tivoli Workload Scheduler (distributed) connector (via
                  the domain manager or first-level domain manager), using the Tivoli Workload
                  Scheduler for z/OS (or OPC) connector, and with firewalls in the picture.


2.5.1 Job log retrieval via the Tivoli Workload Scheduler connector
                 As shown in Figure 2-29 on page 99, the steps behind the scenes in an
                 end-to-end scheduling network when retrieving the job log via the domain
                 manager (using the Tivoli Workload Scheduler (distributed) connector) are:
                 1. Operator requests joblog in Job Scheduling Console.
                 2. JSC connects to oserv running on the domain manager.
                 3. oserv spawns job_instance_output to fetch the job log.
                 4. job_instance_output communicates over TCP directly with the workstation
                    where the joblog exists, bypassing the domain manager.
                 5. netman on that workstation spawns scribner and hands over the TCP
                    connection with job_instance_output to the new scribner process.
                 6. scribner retrieves the joblog.


7. scribner sends the joblog to job_instance_output on the master.
                      8. job_instance_output relays the job log to oserv.
                     9. oserv sends the job log to JSC.


Figure 2-29 Job log retrieval in an end-to-end scheduling network via the domain manager


2.5.2 Job log retrieval via the OPC connector
                     As shown in Figure 2-30 on page 101, the following steps take place behind the
                     scenes in an end-to-end scheduling network when retrieving the job log using the
                     OPC connector.

                      The initial request for the job log proceeds as follows:
                     1. Operator requests joblog in Job Scheduling Console.
                     2. JSC connects to oserv running on the domain manager.
                     3. oserv tells the OPC connector program to request the joblog from the OPC
                        system.



4. opc_connector relays the request to the JSC Server task on the mainframe.
                5. The JSC Server requests the job log from the controller.

                The next step depends on whether the job log has already been retrieved. If the
                job log has already been retrieved, skip to step 17. If the job log has not been
                retrieved yet, continue with step 6.

                Assuming that the log has not been retrieved already:
                6. The controller sends the request for the joblog to the sender subtask.
                7. The controller sends a message to the operator indicating that the job log has
                   been requested. This message is displayed in a dialog box in JSC. (The
                   message is sent via this path: Controller → JSC Server → opc_connector →
                   oserv → JSC).
                8. The sender subtask sends the request to the output translator, via the output
                   queue.
                9. The output translator thread reads the request and spawns a job log retriever
                   thread to handle it.
                10.The job log retriever thread opens a TCP connection directly to the
                   workstation where the job log exists, bypassing the domain manager.
                11.netman on that workstation spawns scribner and hands over the TCP
                   connection with the job log retriever to the new scribner process.
                12.scribner retrieves the job log.
                13.scribner sends the joblog to the job log retriever thread.
                 14.The job log retriever thread passes the job log to the input writer thread.
                 15.The input writer thread sends the job log to the receiver subtask, via the input
                    queue.
                 16.The receiver subtask sends the job log to the controller.

                 When the operator requests the job log a second time, the first five steps are the
                 same as in the initial request (above). This time, because the job log has
                 already been received by the controller:
                17.The controller sends the job log to the JSC Server.
                18.The JSC Server sends the information to the OPC connector program
                   running on the domain manager.
                19.The IBM Tivoli Workload Scheduler for z/OS connector relays the job log to
                   oserv.
                20.oserv relays the job log to JSC and JSC displays the job log in a new window.




Figure 2-30 Job log retrieval in an end-to-end network via the ITWS for z/OS connector - no FIREWALL=Y
configured (the JSC dialog box shown in the figure contains messages EQQMA41I and EQQM637I,
indicating that the job log has been requested from the remote agent and that the command should be
retried later)


2.5.3 Job log retrieval when firewalls are involved
                   When firewalls are involved (that is, FIREWALL=Y is configured in the
                   CPUREC definition of the workstation from which the job log is retrieved), the
                   steps for retrieving the job log in an end-to-end scheduling network are different.
                   These steps are shown in Figure 2-31 on page 102. Note that in this example the
                   firewall is configured to allow only the following traffic: DMY → DMA and DMZ → DMB.
                   1. The operator requests the job log in the JSC or in the mainframe ISPF panels.
                   2. A TCP connection is opened to the parent domain manager of the
                      workstation where the job log exists.
                   3. netman on that workstation spawns router and hands over the TCP socket to
                      the new router process.




4. router opens a TCP connection to netman on the parent domain manager of
                             the workstation where the job log exists, because this DM is also behind the
                             firewall.
                          5. netman on the DM spawns router and hands over the TCP socket with router
                             to the new router process.
                          6. router opens a TCP connection to netman on the workstation where the job
                             log exists.
                          7. netman on that workstation spawns scribner and hands over the TCP socket
                             with router to the new scribner process.
                          8. scribner retrieves the job log.
                          9. scribner on FTA4 sends the job log to router on DMB.
                          10. router sends the job log to the router program running on DMZ.


[Figure 2-31 is a diagram of the numbered flow (steps 1-11): the request travels from the domain manager or z/OS master to netman and router on DMZ, from there to netman and router on DMB (defined with FIREWALL(Y)), and finally to netman and scribner on FTA4 (also FIREWALL(Y)); the job log is then returned along the same path. The firewall sits between DomainY/DomainZ and DomainA/DomainB.]

Figure 2-31 Job log retrieval in an end-to-end network via the ITWS for z/OS server (FIREWALL=Y configured)

                          It is important to note that in the previous scenario, you should not configure the
                          domain manager DMB as FIREWALL=N in its CPUREC definition. If you do, you
                          will not be able to retrieve the job log from FTA4, even though FTA4 is configured
                          as FIREWALL=Y. This is shown in Figure 2-32.

                          In this case, the connection request never reaches netman on DMB. The firewall
                          does not allow direct connections from DMZ to FTA4; the only connections from
                          DMZ that are permitted are those that go to DMB. Because DMB has FIREWALL=N,
                          the connection is not routed through DMB: it tries to go straight to FTA4 and is
                          therefore blocked by the firewall.
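
                          For reference, a correct setup marks both FTA4 and its domain manager DMB as
                          being behind the firewall in their CPUREC definitions. The following fragment is
                          only an illustrative sketch: the host names, port, domain name, and user are
                          hypothetical, and the exact keyword syntax (for example, the accepted values of
                          the FIREWALL keyword) should be checked in the Customization and Tuning reference.

                              CPUREC CPUNAME(DMB)
                                     CPUOS(HPUX)
                                     CPUNODE('dmb.example.com')   /* hypothetical host name      */
                                     CPUTCPIP(31111)              /* netman port                 */
                                     CPUDOMAIN(DOMAINB)
                                     CPUTYPE(FTA)
                                     CPUFULLSTAT(ON)
                                     CPUUSER(tws)
                                     FIREWALL(Y)                  /* DMB is behind the firewall  */
                              CPUREC CPUNAME(FTA4)
                                     CPUOS(UNIX)
                                     CPUNODE('fta4.example.com')  /* hypothetical host name      */
                                     CPUTCPIP(31111)
                                     CPUDOMAIN(DOMAINB)
                                     CPUTYPE(FTA)
                                     CPUUSER(tws)
                                     FIREWALL(Y)                  /* FTA4 is behind the firewall */

                          With both workstations marked this way, the job log request is relayed hop by hop
                          through DMB, as shown in Figure 2-31.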


[Figure 2-32 is a diagram of the misconfigured network: DMB is defined with FIREWALL=N and FTA4 with FIREWALL=Y, so the connection attempt from DMZ (step 2) goes directly toward FTA4 and is stopped at the firewall; netman on DMB is never contacted.]

Figure 2-32 Wrong configuration: connection blocked



2.6 Tivoli Workload Scheduler, important files, and
directory structure
                          Figure 2-33 on page 104 shows the most important files in the Tivoli Workload
                          Scheduler 8.2 working directory in USS (WRKDIR).


[Figure 2-33 is a diagram of the WRKDIR contents. It shows localopts and TWSCCLog.properties at the top level together with the plan files (SymX, Symbad, Symold, Symnew, Sinfonia, Symphony) and the Mailbox.msg and Intercom.msg event queues; the audit, mozart (globalopts, mastsked, jobs), network (NetConf, NetReq.msg), version, pobox (ServerN.msg, FTA.msg, tomaster.msg), and stdlist directories; the Translator.wjl and Translator.chk files; and, under stdlist/logs, the YYYYMMDD_NETMAN.log, YYYYMMDD_TWSMERGE.log, and YYYYMMDD_E2EMERGE.log files. The color legend marks files that are found only on the end-to-end server in HFS on the mainframe (not on UNIX or Windows workstations).]

Figure 2-33 The most important files in the Tivoli Workload Scheduler 8.2 working directory in USS

                          The descriptions of the files are:
                          SymX                    (where X is the name of the user that ran the CP
                                                  extend or Symphony renew job) A temporary file
                                                  created during a CP extend or Symphony renew.
                                                  This file is copied to Symnew, which is then
                                                  copied to Sinfonia and Symphony.
                          Symbad                  (Bad Symphony) Only created if a CP extend or
                                                  Symphony renew results in an invalid Symphony file.
                          Symold                  (Old Symphony) The Symphony file from before the
                                                  most recent CP extend or Symphony renew.
                          Translator.wjl          Translator event log for requested job logs.
                          Translator.chk          Translator checkpoint file.
                          YYYYMMDD_E2EMERGE.log   Translator log.

                               Note: The Symnew, SymX, and Symbad files are temporary files and normally
                               cannot be seen in the USS work directory.
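
                          If you want to inspect these files, you can browse the work directory from a z/OS
                          UNIX shell (for example, under OMVS). The commands below are only a minimal sketch;
                          the work directory path and the log file date are assumptions, so substitute your
                          own WRKDIR value.

                              cd /TWS/TWSC                              # assumed WRKDIR - use your own value
                              ls -al                                    # Symphony, Sinfonia, Mailbox.msg, Intercom.msg, localopts, ...
                              ls -al pobox                              # ServerN.msg, FTA.msg, tomaster.msg event queues
                              ls stdlist/logs                           # YYYYMMDD_NETMAN.log, _TWSMERGE.log, _E2EMERGE.log
                              tail stdlist/logs/20040915_E2EMERGE.log   # translator log for one day (date is an example)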

                          Figure 2-34 shows the most important files in the Tivoli Workload Scheduler 8.2
                          binary directory in USS (BINDIR). The options files in the config subdirectory are
                          only reference copies of these files; they are not active configuration files.


[Figure 2-34 is a diagram of the BINDIR contents: the catalog, codeset, bin, config, and zoneinfo directories; reference copies of NetConf, globalopts, and localopts under config; the batchman, mailman, netman, starter, and writer programs and the config, configure, and translator scripts under bin; and an IBM subdirectory with the load modules EQQBTCHM, EQQCNFG0, EQQCNFGR, EQQMLMN0, EQQNTMN0, EQQSTRTR, EQQTRNSL, and EQQWRTR0. The color legend marks items found only on the end-to-end server in HFS on the mainframe.]

Figure 2-34 A list of the most important files in the Tivoli Workload Scheduler 8.2 binary directory in USS

                          Figure 2-35 on page 106 shows the Tivoli Workload Scheduler 8.2 directory
                          structure on the fault-tolerant agents. Note that the database files (such as jobs
                          and calendars) are not used in the Tivoli Workload Scheduler 8.2 end-to-end
                          scheduling environment.




[Figure 2-35 is a diagram of the tws home directory on a fault-tolerant agent: the Security, network, parameters, bin, mozart, schedlog, stdlist, audit, pobox, version, and localopts entries, with the database files (cpudata, userdata, mastsked, jobs, calendars, prompts, resources) and the globalopts option file beneath them.]

Figure 2-35 Tivoli Workload Scheduler 8.2 directory structure on the fault-tolerant agents



2.7 conman commands in the end-to-end environment
                  In Tivoli Workload Scheduler, you can use the conman command line interface to
                  manage the distributed production. A subset of these commands can also be
                  used in end-to-end scheduling. In general, command options that could affect the
                  information contained in the Symphony file are not allowed. Disallowed conman
                  command options include add and remove dependencies, submit and cancel
                  jobs, and so forth.
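
                   For example, the following status display and link-management commands are
                   typically usable on a fault-tolerant workstation or domain manager in an
                   end-to-end network (the workstation name DMB is hypothetical, and the exact
                   subset of allowed commands depends on the agent type and fix pack level):

                       conman "sc @!@"        # showcpus: status of all workstations in all domains
                       conman "ss @#@"        # showschedules: status of all job streams in the plan
                       conman "sj @#@.@"      # showjobs: status of all jobs in the plan
                       conman "unlink DMB"    # stop sending events to a workstation
                       conman "link DMB"      # re-establish the link to that workstation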

                  Figure 2-36 on page 107 and Figure 2-37 on page 107 list the conman
                  commands that are available on end-to-end fault-tolerant workstations in a Tivoli
                   Workload Scheduler 8.2 end-to-end scheduling network. Note that in the Type
                   field, M stands for domain managers, F for fault-tolerant agents, and A for
                   standard agents.

                    Note: The composer command line interface, which is used to manage
                    database objects in a distributed Tivoli Workload Scheduler environment, is
                    not used in end-to-end scheduling because in end-to-end scheduling, the
                    databases are located on the Tivoli Workload Scheduler for z/OS master.




Figure 2-36 conman commands available in end-to-end environment




Figure 2-37 conman commands available in end-to-end environment





    Chapter 3.    Planning end-to-end
                  scheduling with Tivoli
                  Workload Scheduler 8.2
                  In this chapter, we provide details on how to plan for end-to-end scheduling with
                  Tivoli Workload Scheduler for z/OS, Tivoli Workload Scheduler, and the Job
                  Scheduling Console.

                  The chapter covers two areas:
                  1. Before the installation is performed
                      Here we describe what to consider before performing the installation and how
                      to order the product. This includes the following sections:
                      – “Different ways to do end-to-end scheduling” on page 111
                      – “The rationale behind end-to-end scheduling” on page 112
                      – “Before you start the installation” on page 113
                  2. Planning for end-to-end scheduling
                      Here we describe relevant planning issues that should be considered and
                      handled before the actual installation and customization of Tivoli Workload




Scheduler for z/OS, Tivoli Workload Scheduler, and Job Scheduling Console
                    is performed. This includes the following sections:
                    – “Planning end-to-end scheduling with Tivoli Workload Scheduler for z/OS”
                      on page 116
                    – “Planning for end-to-end scheduling with Tivoli Workload Scheduler” on
                      page 139
                    – “Planning for the Job Scheduling Console” on page 149
                    – “Planning for migration or upgrade from previous versions” on page 155
                    – “Planning for maintenance or upgrades” on page 156




3.1 Different ways to do end-to-end scheduling
         The ability to connect mainframe and distributed platforms into an integrated
         scheduling network is not new. Several years ago, IBM offered two methods:
            By use of Tivoli OPC tracker agents
             With tracker agents, Tivoli Workload Scheduler for z/OS can submit and
             monitor jobs on remote tracker agents. The tracker agent software supported
             only a limited set of operating systems. Also, tracker agents were not
             fault-tolerant, so if the network went down, tracker agents would not continue
             to run.
             Furthermore, tracker agents did not scale well: it simply was not possible to
             get a stable environment for large distributed installations with several
             hundred tracker agents.
            By use of Tivoli Workload Scheduler MVS extended agents
             Using extended agents, Tivoli Workload Scheduler can submit and monitor
             mainframe jobs in (for example) OPC or JES. The extended agents had
             limited functionality and were not fault tolerant. This approach required a
             Tivoli Workload Scheduler master and was not ideal for large, established MVS
             workloads. Extended agents, though, can be a perfectly viable solution for a
             large Tivoli Workload Scheduler network that needs to run only a few jobs in a
             z/OS mainframe environment.

         From Tivoli Workload Scheduler 8.1, it was possible to integrate Tivoli Workload
         Scheduler agents with Tivoli Workload Scheduler for z/OS, so Tivoli Workload
         Scheduler for z/OS was the master doing scheduling and tracking for jobs in the
         mainframe environment as well as in the distributed environment.

         The end-to-end scheduling feature of Tivoli Workload Scheduler 8.1 was the first
         step toward a complete unified system.

          The end-to-end solution has been optimized in Tivoli Workload Scheduler 8.2,
          where the integration between the two products, Tivoli Workload Scheduler and
          Tivoli Workload Scheduler for z/OS, is even tighter.

         Furthermore, some of the functions that were missing in the first Tivoli Workload
         Scheduler 8.1 solution have been added in the Version 8.2 end-to-end solution.




3.2 The rationale behind end-to-end scheduling
                As described in Section 2.3.6, “Benefits of end-to-end scheduling” on page 86,
                you can gain several benefits by using Tivoli Workload Scheduler for z/OS
                end-to-end scheduling. To review:
                    You can use fault-tolerant agents so that distributed job scheduling is more
                    independent from problems with network connections and poor network
                    performance.
                     You can schedule workload on additional operating systems such as Linux
                    and Windows 2000.
                    You have a seamless synchronization of work in mainframe and distributed
                    environments.
                    Making dependencies between mainframe jobs and jobs in distributed
                    environments is straightforward, using the same terminology and known
                    interfaces.
                    Tivoli Workload Scheduler for z/OS can use multi-tier architecture with Tivoli
                    Workload Scheduler domain managers.
                     You get extended planning capabilities, such as the use of long-term plans,
                     trial plans, and extended plans, for the distributed Tivoli Workload
                     Scheduler network as well.
                     An extended plan means that the current plan can span more than 24 hours.
                    The powerful run-cycle and calendar functions in Tivoli Workload Scheduler
                    for z/OS can be used for distributed Tivoli Workload Scheduler jobs.

                 Besides these benefits, using Tivoli Workload Scheduler for z/OS end-to-end
                 scheduling also makes it possible to:
                     Reuse or reinforce the procedures and processes that are established for the
                     Tivoli Workload Scheduler for z/OS mainframe environment.
                     Let operators, planners, and administrators who are trained and experienced
                     in managing the Tivoli Workload Scheduler for z/OS workload reuse their
                     skills and knowledge on the distributed jobs managed through end-to-end
                     scheduling.
                     Extend the disciplines established to manage and operate workload scheduling
                     in mainframe environments to the distributed environment.
                     Extend the contingency procedures established for the mainframe environment
                     to the distributed environment.

                Basically, when we look at end-to-end scheduling in this book, we consider
                scheduling in the enterprise (mainframe and distributed) where the Tivoli
                Workload Scheduler for z/OS engine is the master.


3.3 Before you start the installation
         The short version of this story is: “Get the right people on board.”

          End-to-end scheduling with Tivoli Workload Scheduler is not complicated to
          implement, but it is important to understand that end-to-end scheduling can
          involve many different platforms and operating systems, uses IP
          communication, can work across firewalls, and can use SSL communication.

         As described earlier in this book, end-to-end scheduling involves two products:
         Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS. These
         products must be installed and configured to work together for successful
         end-to-end scheduling. Tivoli Workload Scheduler for z/OS is installed in the
         z/OS mainframe environment, and Tivoli Workload Scheduler is installed on the
         distributed platforms where job scheduling is going to be performed.

         We suggest that you establish an end-to-end scheduling team or project group
         that includes people who are skilled in the different platforms and operating
         systems. Ensure that you have skilled people who know how IP communication,
         firewalls, and SSL work in the different environments and can configure these
         components to work in them.

         The team will be responsible for doing the planning, installation, and operation of
         the end-to-end scheduling environment, must be able to cooperate across
         department boundaries, and must understand the entire scheduling environment,
         both mainframe and distributed.

         Tivoli Workload Scheduler for z/OS administrators should be familiar with the
         domain architecture and the meaning of fault tolerant in order to understand
         that, for example, the script is not necessarily located in the job repository
         database. This is essential when it comes to reflecting the end-to-end network
         topology in Tivoli Workload Scheduler for z/OS.

         On the other hand, people who are in charge of Tivoli Workload Scheduler need
         to know the Tivoli Workload Scheduler for z/OS architecture to understand the
         new planning mechanism and Symphony file creation.

         Another important thing to plan for is education or skills transfer to planners and
         operators who will have the daily responsibilities of end-to-end scheduling. If your
         planners and operators are knowledgeable, they will be able to work more
         independently with the products and you will realize better quality.

         We recommend that all involved people (mainframe and distributed scheduling)
         become familiar with both scheduling environments as described throughout this
         book.




                 Because end-to-end scheduling can involve different platforms and operating
                 systems with different interfaces (TSO/ISPF on the mainframe, a command prompt
                 on UNIX, and so forth), we also suggest planning to deploy the Job Scheduling
                 Console. The JSC provides a unified and platform-independent interface to job
                 scheduling, so users do not need detailed skills in interfaces that depend on a
                 particular operating system.


3.3.1 How to order the Tivoli Workload Scheduler software
                The Tivoli Workload Scheduler solution consists of three products:
                    IBM Tivoli Workload Scheduler for z/OS
                    (formerly called Tivoli Operations Planning and Control, or OPC)
                    Focused on mainframe-based scheduling
                    Tivoli Workload Scheduler
                    (formerly called Maestro)
                    Focused on open systems–based scheduling and can be used with the
                    mainframe-based products for a comprehensive solution across both
                    distributed and mainframe environments
                    Tivoli Workload Scheduler for Applications
                    Enables direct, easy integration between the Tivoli Workload Scheduler and
                    enterprise applications such as Oracle E-business Suite, PeopleSoft, and
                    SAP R/3.

                Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS can be
                ordered independently or together in one program suite. The JSC graphical user
                interface is delivered together with Tivoli Workload Scheduler for z/OS and Tivoli
                Workload Scheduler. This is also the case for the connector software that makes
                it possible for the JSC to communicate with either Tivoli Workload Scheduler for
                z/OS or Tivoli Workload Scheduler.

                 Table 3-1 shows each product and its included components.
                 Table 3-1 Product and components

                   Components                                         IBM Tivoli       Tivoli Workload   Tivoli Workload
                                                                      Workload         Scheduler 8.2     Scheduler 8.2
                                                                      Scheduler for                      for Applications
                                                                      z/OS 8.2

                   z/OS engine (OPC Controller and Tracker)           X

                   Tracker agent enabler                              X

                   End-to-end enabler                                 X

                   Tivoli Workload Scheduler distributed (Maestro)                     X

                   Tivoli Workload Scheduler Connector                                 X

                   IBM Tivoli Workload Scheduler for z/OS Connector   X

                   Job Scheduling Console                             X                X

                   IBM Tivoli Workload Scheduler for Applications
                   for z/OS (Tivoli Workload Scheduler extended                                          X
                   agent for z/OS)
Note that the end-to-end enabler component (FMID JWSZ203) is used to
populate the base binary directory in an HFS during System Modification
Program/Extended (SMP/E) installation.

The tracker agent enabler component (FMID JWSZ2C0) makes it possible for the
Tivoli Workload Scheduler for z/OS controller to communicate with old Tivoli
OPC distributed tracker agents.

 Attention: The Tivoli OPC distributed tracker agents went out of support
 October 31, 2003.

To be able to use the end-to-end scheduling solution you should order both
products: IBM Tivoli Workload Scheduler for z/OS and Tivoli Workload
Scheduler. In the following section, we list the ordering details.

Contact your IBM representative if you have any problems ordering the products
or if any part of the delivery or its components is missing.

Software ordering details
Table 3-2 on page 116 shows ordering details for Tivoli Workload Scheduler for
z/OS and Tivoli Workload Scheduler.



Table 3-2 Ordering details

                   Component            IBM Tivoli Workload        IBM Tivoli Workload        Tivoli Workload
                                        Scheduler for z/OS 8.2     Scheduler for z/OS         Scheduler 8.2
                                                                   Host Edition

                   z/OS Engine          Yes, optional              Yes

                   z/OS Agent           Yes, optional              Yes

                   End-to-end Enabler   Yes, optional              Yes

                   Distributed FTA                                                            Yes

                   JSC                  Yes                        Yes                        Yes

                   Delivery             Native tape,               ServicePac® or CBPDO       CD-ROM for all
                                        ServicePac, or CBPDO                                  distributed platforms

                   Comments             The 3 z/OS components      All 3 z/OS components
                                        can be licensed and        are included when the
                                        delivered individually     customer buys and
                                                                   receives the product

                   Program number       5697-WSZ                   5698-WSH                   5698-A17

3.3.2 Where to find more information for planning
                Besides this redbook, you can find more information in IBM Tivoli Workload
                Scheduling Suite General Information Version 8.2, SC32-1256. This manual is a
                good place to start to learn more about Tivoli Workload Scheduler, Tivoli
                Workload Scheduler for z/OS, the JSC, and end-to-end scheduling.



3.4 Planning end-to-end scheduling with Tivoli
Workload Scheduler for z/OS
                Before installing the Tivoli Workload Scheduler for z/OS and activating the
                end-to-end scheduling feature, there are several areas to consider and plan for.
                These areas are described in the following sections.




3.4.1 Tivoli Workload Scheduler for z/OS documentation
           Tivoli Workload Scheduler for z/OS documentation is not shipped in hardcopy
           form with IBM Tivoli Workload Scheduler for z/OS 8.2.

           The books are available in PDF and IBM softcopy format and delivered on a
           CD-ROM with the Tivoli Workload Scheduler for z/OS product. The CD-ROM has
           part number SK2T-6951 and can also be ordered separately.

           Several of the Tivoli Workload Scheduler for z/OS books have been updated or
           revised starting in April 2004. This means that the books that are delivered with
           the base product are outdated, and we strongly suggest that you confirm that you
           have the newest versions of the books before starting the installation. This is true
           even for Tivoli Workload Scheduler for z/OS 8.2.

            Note: The publications are available for download in PDF format at:
            http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html

            Look for books marked with “Revised April 2004,” as they have been updated
            with changes introduced by service (APARs and PTFs) for Tivoli Workload
            Scheduler for z/OS produced after the base version of the product was
            released in June 2003.

           We recommend that you have access to, and possibly print, the newest versions
           of the Tivoli Workload Scheduler for z/OS publications before starting the
           installation.

           Tivoli OPC tracker agents
           Although the distributed Tivoli OPC tracker agents are not supported and cannot
           be ordered any more, Tivoli Workload Scheduler for z/OS 8.2 can still
           communicate with these tracker agents, because the agent enabler software
           (FMID JWSZ2C0) is delivered with Version 8.2.

           However, the Version 8.2 manuals do not describe the related TCP or APPC
           ROUTOPTS initialization statement parameters. If you are going to use Tivoli
           OPC tracker agents with Version 8.2, then save the related Tivoli OPC
           publications, so you can use them for reference when necessary.


3.4.2 Service updates (PSP bucket, APARs, and PTFs)
            Before starting the installation, be sure to check the service level of the Tivoli
            Workload Scheduler for z/OS product that you have received from IBM, and make
            sure that you obtain all available service so that it can be installed together
            with Tivoli Workload Scheduler for z/OS.


Because the period from the time that installation of Tivoli Workload Scheduler
                for z/OS is started until it is activated in your production environment can be
                several months, we suggest that the installed Tivoli Workload Scheduler for z/OS
                be updated with all service that is available at the installation time.

                Preventive service planning (PSP)
                The Program Directory that is provided with your Tivoli Workload Scheduler for
                z/OS distribution tape is an important document that may include technical
                information that is more recent than the information provided in this section. It
                also describes the program temporary fix (PTF) level of the Tivoli Workload
                Scheduler for z/OS licensed program when it was shipped from IBM, and
                contains instructions for unloading the software and information about additional
                maintenance for your level of the received distribution tape for the z/OS
                installation.

                Before you start installing Tivoli Workload Scheduler for z/OS, check the
                preventive service planning bucket for recommendations that may have been
                added by the service organizations after your Program Directory was produced.
                The PSP includes a recommended service section that includes high-impact or
                pervasive (HIPER) APARs. Ensure that the corresponding PTFs are installed
                before you start to customize a Tivoli Workload Scheduler for z/OS subsystem.

                Table 3-3 gives the PSP information for Tivoli Workload Scheduler for z/OS to be
                used when ordering the PSP bucket.
                Table 3-3 PSP upgrade and subset ID information
                  Upgrade                         Subset               Description

                  TWSZOS820                       HWSZ200              Agent for z/OS

                                                  JWSZ202              Engine (Controller)

                                                  JWSZ2A4              Engine English NLS

                                                  JWSZ201              TCP/IP communication

                                                  JWSZ203              End-to-end enabler

                                                  JWSZ2C0              Agent enabler

                  Important: If you are running a previous version of IBM Tivoli Workload
                  Scheduler for z/OS or OPC on a system where the JES2 EXIT2 was
                  assembled using the Tivoli Workload Scheduler for z/OS 8.2 macros, apply
                  the following PTFs to avoid job tracking problems due to missing A1 and A3P
                  records:
                      Tivoli OPC 2.3.0: Apply UQ66036 and UQ68474.
                      IBM Tivoli Workload Scheduler for z/OS 8.1: Apply UQ67877.


118   End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
Important service for Tivoli Workload Scheduler for z/OS
Besides the APARs and PTFs that are listed in the PSP bucket, we suggest that
you plan to apply all available service for Tivoli Workload Scheduler for z/OS in
the installation phase.
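
One way to verify what has already been applied, or to trial-run the outstanding
service, is an SMP/E job against the Tivoli Workload Scheduler for z/OS target
zone. The following fragment is only a sketch: the CSI data set name and zone
name are site-specific assumptions, and it relies on the usual DDDEF entries
being in place.

    //SMPCHECK EXEC PGM=GIMSMP,REGION=0M
    //SMPCSI   DD DISP=SHR,DSN=TWS.V8R2.GLOBAL.CSI
    //SMPCNTL  DD *
      SET BOUNDARY(TWSTGT) .               /* assumed target zone name           */
      APPLY PTFS GROUPEXTEND CHECK .       /* trial run only, nothing is updated */
    /*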

At the time of writing this book, we found several important APARs for Tivoli
Workload Scheduler for z/OS end-to-end scheduling and have listed some of
them in Table 3-4. The table also shows whether the corresponding PTFs were
available when this book was written (the number in the PTF number column).

 Note: The APAR list in Table 3-4 is not complete, but is used to give some
 examples of important service to apply during the installation. As mentioned
 before, we strongly suggest that you apply all available service during your
 installation of Tivoli Workload Scheduler for z/OS.

Table 3-4 Important service
 APAR number         PTF number            Description

 PQ76474             UQ81495               Checks for number of dependencies for FTW
                     UQ81498               job and two new messages, EQQX508E and
                                           EQQ3127E, to indicate that FTW job cannot be
                                           added to AD or CP (Symphony file) due to
                                           more than 40 dependencies for this job.

 PQ77014             UQ81476               During the daily planning or Symphony renew,
                     UQ81477               the batch job ends with RC=0 even though
                                           warning messages have been issued for the
                                           Symphony file.

 PQ77535             Not available         Important documentation with additional
 Doc. APAR                                 information when creating and maintaining
                                           HFS files needed for Tivoli Workload
                                           Scheduler end-to-end processing.

 PQ77970             UQ82583               Makes it possible to customize the job name in
                     UQ82584               the Symphony file.
                     UQ82585               Before the fix, the job name was always
                     UQ82587               generated using the operation number and
                     UQ82579               occurrence name. Now it can be customized.
                     UQ82601               The EQQPDFXJ member in the SEQQMISC
                     UQ82602               library holds a detailed description (see
                                           Chapter 4, “Installing IBM Tivoli Workload
                                           Scheduler 8.2 end-to-end scheduling” on
                                           page 157 for more information).





                  PQ78043              UQ81567               64M is recommended as the minimum region
                                                             size for an E2E server; however, the sample
                                                             server JCL (member EQQSER in
                                                             SEQQSAMP) still has REGION=6M. This
                                                             should be changed to REGION=64M.

                  PQ78097              Not available         Better documentation of the WSSTAT
                  Doc. APAR                                  MANAGES keyword.

                  PQ78356              UQ82697               When a job stream is added via MCP to the
                                                             Symphony file, it is always added with the GMT
                                                             time; therefore in this case the local timezone
                                                             set for the FTA is completely ignored.

                  PQ78891              UQ82790               Introduces new messages in the server
                                       UQ82791               message log when USS processes end
                                       UQ82784               abnormally or unexpectedly. Important for
                                       UQ82793               monitoring of the server and USS processes.
                                       UQ82794               Updates server related messages in controller
                                                             message log to be more precise.

                  PQ79126              Not available         In the documentation, any reference to ZFS
                  Doc. APAR                                  files is missing. The Tivoli Workload Scheduler
                                                             end-to-end server fully supports and can
                                                             access UNIX System Services (USS) in a
                                                             Hierarchical File System (HFS) or in a
                                                             zSeries® File System (zFS) cluster.

                  PQ79875              Not available         If you have any fault-tolerant workstations on
                  Doc. APAR                                  Windows supported platforms and you want to
                                                             run jobs on these workstations, you must
                                                             create a member containing all users and
                                                             passwords for Windows users who need to
                                                             schedule jobs to run on Windows workstations.
                                                             The Windows users are described using
                                                             USRREC initialization statements.

                  PQ80229              Not available         In the IBM Tivoli Workload Scheduler for z/OS
                  Doc. APAR                                  Installation Guide, the description of the
                                                             end-to-end Input and Output Events Data Sets
                                                             (EQQTWSIN and EQQTWSOU) is misleading
                                                             because it states that the LRECL for these files
                                                             can be anywhere from 120 to 32000 bytes. In
                                                             reality, the LRECL must be 120. Defining a
                                                             larger LRECL causes a waste of disk space,
                                                             which can lead to problems if the EQQTWSIN
                                                             and EQQTWSOU files fill up completely.
                                                             Also see text in APAR PQ77970.




PQ80341             UQ88867               End-to-end: Missing synchronization process
                    UQ88868               between event manager and receiver tasks at
                    UQ88869               Controller startup.
                                          Several new messages are introduced by this
                                          APAR (documented in EQQPDFEM member in
                                          SEQQMISC data).

PQ81405             UQ82765               Checks for number of dependencies for FTW
                    UQ82766               job and new message, EQQG016E, to indicate
                                          that FTW job cannot be added to CP due to
                                          more than 40 dependencies for this job.

PQ84233             UQ87341               Implements support for Tivoli Workload
                    UQ87342               Scheduler for z/OS commands: NP (NOP), UN
                    UQ87343               (UN-NOP), EX (Execute), and for the “submit”
                    UQ87344               automatic option for operations defined on
                    UQ87345               fault-tolerant workstations.
                    UQ87377               Also introduces a new TOPOLOGY
                                          NOPTIMEDEPENDENCY (YES/NO)
                                          parameter.

PQ87120             UQ89138               Porting of Tivoli Workload Scheduler 8.2
                                          FixPack 04 to end-to-end feature on z/OS.
                                          With this APAR the Tivoli Workload Scheduler
                                          for z/OS 8.2 end-to-end code has been aligned
                                          with the Tivoli Workload Scheduler distributed
                                          code FixPack 04 level.
                                          This APAR also introduces the Backup Domain
                                          Fault Tolerant feature in the end-to-end
                                          environment.

PQ87110             UQ90485               The Tivoli Workload Scheduler end-to-end
                    UQ90488               server is not able to get mutex lock if
                                          mountpoint of a shared HFS is moved without
                                          stopping the server. Also it contains a very
                                          important documentation update that
                                          describes how to configure the end-to-end
                                           server work directory correctly in a sysplex
                                          environment with hot stand-by controllers.


Note: To learn about updates to the Tivoli Workload Scheduler for z/OS books
and the APARs and PTFs that pre-date April 2004, consult “April 2004
Revised” versions of the books, as mentioned in 3.4.1, “Tivoli Workload
Scheduler for z/OS documentation” on page 117.




Special documentation updates introduced by service
                 Some APARs were fixed on Tivoli Workload Scheduler for z/OS 8.1 while the
                 general availability code for Tivoli Workload Scheduler for z/OS 8.2 was frozen
                 for shipment. All of these fixes or PTFs are sysrouted through the level-set
                 APAR PQ74854 (also described as a hiper cumulative APAR).

                 This cumulative APAR is meant to align the Version 8.2 code with the
                 maintenance level that was reached while the GA code was frozen.

                With APAR PQ74854, the documentation has been updated and is available in a
                PDF file. To access the changes described in this PDF file:
                    Apply the PTF for APAR PQ74854.
                    Transfer the EQQPDF82 member from the SEQQMISC library on the
                    mainframe to a file on your personal workstation. Remember to transfer using
                    the binary transfer type. The file extension must be pdf.
                    Read the document using Adobe (Acrobat) Reader.
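
                 The binary transfer can be done, for example, with a command-line FTP client on
                 the workstation. This is only a sketch; the host name and the high-level
                 qualifier of the SEQQMISC library are assumptions, so substitute your own
                 installation values.

                     ftp> open mvshost.example.com
                     ftp> binary
                     ftp> get 'TWS.V8R2M0.SEQQMISC(EQQPDF82)' eqqpdf82.pdf
                     ftp> quit

                 The same procedure applies to the other PDF members described in the rest of
                 this section.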

                APAR PQ77970 (see Table 3-4 on page 119) makes it possible to customize how
                the job name in the Symphony file is generated. The PTF for APAR PQ77970
                installs a member, EQQPDFXJ, in the SEQQMISC library. This member holds a
                detailed description of how the job name in the Symphony file can be customized
                and how to specify the related parameters. To read the documentation in the
                EQQPDFXJ member:
                    Apply the PTF for APAR PQ77970.
                    Transfer the EQQPDFXJ member from the SEQQMISC library on the
                    mainframe to a file on your personal workstation. Remember to transfer using
                    the binary transfer type. The file extension must be pdf.
                    Read the document using Adobe Reader.

                APAR PQ84233 (see Table 3-4 on page 119) implements support for Tivoli
                Workload Scheduler for z/OS commands for fault-tolerant agents and introduces
                 a new TOPOLOGY NOPTIMEDEPENDENCY(Yes/No) parameter. The PTF for
                APAR PQ84233 installs a member, EQQPDFNP, in the SEQQMISC library. This
                member holds a detailed description of the supported commands and the
                NOPTIMEDEPENDENCY parameter. To read the documentation in the
                EQQPDFNP member:
                    Apply the PTF for APAR PQ84233.
                    Transfer the EQQPDFNP member from the SEQQMISC library on the
                    mainframe to a file on your personal workstation. Remember to transfer using
                    the binary transfer type. The file extension must be pdf.
                    Read the document using Adobe Reader.



Note: The documentation updates that are described in the EQQPDF82,
            EQQPDFXJ, and EQQPDFNP members in SEQQMISC are in the “April 2004
            Revised” versions of the Tivoli Workload Scheduler for z/OS books, mentioned
            in 3.4.1, “Tivoli Workload Scheduler for z/OS documentation” on page 117.

           APAR PQ80341 (see Table 3-4 on page 119) improves the synchronization
           process between the controller event manager and receiver tasks. The APAR
           also introduces several new or updated messages. The PTF for APAR PQ80341
           installs a member, EQQPDFEM, in the SEQQMISC library. This member holds a
           detailed description of the new or updated messages related to the improved
           synchronization process. To read the documentation in the EQQPDFEM
           member:
              Apply the PTF for APAR PQ80341.
              Transfer the EQQPDFEM member from the SEQQMISC library on the
              mainframe to a file on your personal workstation. Remember to transfer using
               the binary transfer type. The file extension must be .pdf.
              Read the document using Adobe Reader.

           APAR PQ87110 (see Table 3-4 on page 119) contains important documentation
           updates with suggestions on how to define the end-to-end server work directory
           in a SYSPLEX shared HFS environment and a procedure to be followed before
           starting a scheduled shutdown for a system in the sysplex.

           The PTF for APAR PQ87110 installs a member, EQQPDFSY, in the SEQQMISC
           library. This member holds the documentation updates. To read the
           documentation in the EQQPDFSY member:
              Apply the PTF for APAR PQ87110.
               Transfer the EQQPDFSY member from the SEQQMISC library on the
              mainframe to a file on your personal workstation. Remember to transfer using
              the binary transfer type. The file extension must be .pdf.
              Read the document using Adobe Reader.


3.4.3 Tivoli Workload Scheduler for z/OS started tasks for end-to-end
scheduling
           As described in the architecture chapter, end-to-end scheduling involves at least
           two started tasks: the Tivoli Workload Scheduler for z/OS controller and the Tivoli
           Workload Scheduler for z/OS server.




The server started task will do all communication with the distributed
                fault-tolerant agents and will handle updates (for example, to the Symphony file).
                The server task must always run on the same z/OS system as the active
                controller task.

                In Tivoli Workload Scheduler for z/OS 8.2, it is possible to configure one server
                started task that can handle end-to-end scheduling, communication with JSC
                users, and APPC communication.

                Even though it is possible to configure one server started task, we strongly
                suggest using a dedicated server started task for the end-to-end scheduling.

                Using dedicated started tasks with dedicated responsibilities makes it possible,
                for example, to restart the JSC server started task without any impact on the
                scheduling in the end-to-end server started task.
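
                 As an illustration, the two dedicated servers could be pointed at separate
                 parameter members along the lines of the following sketch. The subsystem name,
                 member names, host name, and port are assumptions; see the SERVOPTS statement
                 in the Customization and Tuning reference for the complete keyword list.

                     /* Parameter member for the dedicated end-to-end server                  */
                     SERVOPTS SUBSYS(TWSC)              /* controller subsystem name          */
                              PROTOCOL(E2E)             /* this server does end-to-end only   */
                              TPLGYPRM(TPLGPARM)        /* member with TOPOLOGY, DOMREC,      */
                                                        /* CPUREC, and USRREC statements      */

                     /* Parameter member for the dedicated JSC server                         */
                     SERVOPTS SUBSYS(TWSC)
                              PROTOCOL(JSC)             /* this server handles JSC connections*/
                              JSCHOSTNAME(TWSCJSC)      /* hypothetical host name             */
                              PORTNUMBER(425)           /* hypothetical port number           */
                              USERMAP(USERS)            /* member mapping JSC users to z/OS IDs */

                 With this split, the JSC server can be recycled for maintenance while the
                 end-to-end server keeps driving the distributed workload.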

                Although it is possible to run end-to-end scheduling with the Tivoli Workload
                Scheduler for z/OS ISPF interface, we suggest that you plan for use of the Job
                Scheduling Console (JSC) graphical user interface. Users with background in the
                distributed world will find the JSC much easier to use than learning a new
                interface such as TSO/ISPF to manage their daily work. Therefore we suggest
                planning for implementation of a server started task that can handle the
                communication with the JSC Connector (JSC users).


3.4.4 Hierarchical File System (HFS) cluster

                  Terminology note: An HFS data set is a z/OS data set that contains a
                  POSIX-compliant hierarchical file system, which is a collection of files and
                  directories organized in a hierarchical structure that can be accessed using
                  the z/OS UNIX system services (USS).


                Tivoli Workload Scheduler code has been ported into UNIX System Services
                (USS) on z/OS. When planning for the end-to-end scheduling with Tivoli
                Workload Scheduler for z/OS, keep in mind that the server starts multiple tasks
                and processes using the USS in z/OS. The end-to-end server accesses the code
                delivered from IBM and creates several work files in Hierarchical File System
                clusters.

                Because of this, the z/OS USS function must be active in the z/OS environment
                before you can install and use the end-to-end scheduling feature in Tivoli
                Workload Scheduler for z/OS.




The Tivoli Workload Scheduler code is installed with SMP/E in an HFS cluster in
USS. It can be installed in an existing HFS cluster or in a dedicated HFS cluster,
depending on how the z/OS USS is configured.

Besides the installation binaries delivered from IBM, the Tivoli Workload
Scheduler for z/OS server also needs several work files in a USS HFS cluster.
We suggest that you use a dedicated HFS cluster for the server work files. If you
are planning to install several Tivoli Workload Scheduler for z/OS end-to-end
scheduling environments, you should allocate one USS HFS cluster for work files
per end-to-end scheduling environment.

Furthermore, if the z/OS environment is configured as a sysplex, where the Tivoli
Workload Scheduler for z/OS server can be active on different z/OS systems
within the sysplex, you should make sure that the USS HFS clusters with Tivoli
Workload Scheduler for z/OS binaries and workfiles can be accessed from all of
the sysplex’s systems. Starting from OS/390 Version 2 Release 9, it is possible to
mount USS HFS clusters either in read-only mode or in read/write mode on all
systems in a sysplex.

The USS HFS cluster with the Tivoli Workload Scheduler for z/OS binaries
should then be mounted in read mode on all systems and the USS HFS cluster
with the Tivoli Workload Scheduler for z/OS work files should be mounted in
read/write mode on all systems in the sysplex.

Figure 3-1 on page 126 illustrates the use of dedicated HFS clusters for two
Tivoli Workload Scheduler for z/OS environments: test and production.




                Production environment for server:
                   Work files: HFS DSN OMVS.TWSCPROD.HFS, mount point /TWS/TWSCPROD,
                   mounted read/write on all systems; referenced by WRKDIR('/TWS/TWSCPROD')
                   Installation binaries: HFS DSN OMVS.PROD.TWS820.HFS, mount point
                   /TWS/PROD/bin820, mounted read-only on all systems; referenced by
                   BINDIR('/TWS/PROD/bin820')

                Test environment for server:
                   Work files: HFS DSN OMVS.TWSCTEST.HFS, mount point /TWS/TWSCTEST,
                   mounted read/write on all systems; referenced by WRKDIR('/TWS/TWSCTEST')
                   Installation binaries: HFS DSN OMVS.TEST.TWS820.HFS, mount point
                   /TWS/TEST/bin820, mounted read-only on all systems; referenced by
                   BINDIR('/TWS/TEST/bin820')

                Figure 3-1 Dedicated HFS clusters for Tivoli Workload Scheduler for z/OS server test and
                production environment
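
                The corresponding BPXPRMxx MOUNT statements for the production
                environment in Figure 3-1 might look like the following sketch. The data set
                names and mount points are taken from the figure; in a shared HFS sysplex,
                additional parameters such as AUTOMOVE may be needed, depending on
                your setup.

                   MOUNT FILESYSTEM('OMVS.TWSCPROD.HFS')
                         MOUNTPOINT('/TWS/TWSCPROD')
                         TYPE(HFS) MODE(RDWR)
                   MOUNT FILESYSTEM('OMVS.PROD.TWS820.HFS')
                         MOUNTPOINT('/TWS/PROD/bin820')
                         TYPE(HFS) MODE(READ)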


                  Note: IBM Tivoli Workload Scheduler for z/OS 8.2 supports zFS (z/OS File
                  System) clusters as well as HFS clusters (APAR PQ79126). Because zFS
                  clusters offer significant performance improvements over HFS, we suggest
                  considering the use of zFS clusters instead of HFS clusters. For this redbook,
                  we used HFS clusters in our implementation.

                We recommend that you create a separate HFS cluster for the working directory,
                mounted in read/write mode. This is because the working directory is application
                specific and contains application-related data. It also makes your backup easier.
                The size of the cluster depends on the size of the Symphony file and how long
                you want to keep the stdlist files. We recommend starting with at least 2 GB of
                space.

                We also recommend that you plan to have separate HFS clusters for the binaries
                if you have more than one Tivoli Workload Scheduler end-to-end scheduling
                environment, as shown in Figure 3-1. This makes it possible to apply
                maintenance and test it in the test environment before it is promoted to the
                production environment.



           As mentioned earlier, OS/390 2.9 and higher support the use of shared HFS
           clusters. Some directories (usually /var, /dev, /etc, and /tmp) are system specific,
           meaning that those paths are symbolic links that resolve to different directories
           on each system. When you specify the work directory, make sure that it is not on
           a system-specific file system; or, if it is, make sure that the corresponding paths
           on the other systems point to the same directory. For example, you can use
           /u/TWS, which is not system specific. Or you can use /var/TWS on system SYS1
           and create a symbolic link /SYS2/var/TWS to /SYS1/var/TWS so that /var/TWS
           points to the same directory on both SYS1 and SYS2.
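
           For example, the symbolic link described above could be created with a USS
           shell command like the following sketch (the paths are those used in the
           example; adjust them to your own naming standards):

              # On SYS2, make /var/TWS resolve to the directory used on SYS1
              ln -s /SYS1/var/TWS /SYS2/var/TWS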

           If you are using OS/390 versions earlier than Version 2.9 in a sysplex, the HFS
           cluster with the work files and binaries should be mounted manually on the
           system where the server is active. If the server is going to be moved to another
           system in the sysplex, the HFS clusters should be unmounted from the first
           system and mounted on the system where the server is going to be active. On
           the new system, the HFS cluster with work files should be mounted in read/write
           mode, and the HFS cluster with the binaries should be mounted in read mode.
           The filesystem can be mounted in read/write mode on only one system at a time.

            Note: Please also check the documentation updates in APAR PQ87110 (see
            Table 3-4 on page 119) if you are planning to use a shared HFS work directory
            for the end-to-end server. The PTFs for this APAR contain important
            documentation updates with suggestions on how to define the end-to-end
            server work directory in a SYSPLEX shared HFS environment and a
            procedure to be followed before starting a scheduled shutdown for a system in
            the sysplex.


           Migrating from IBM Tivoli Workload Scheduler for z/OS 8.1
           If you are migrating from Tivoli Workload Scheduler for z/OS 8.1 to Tivoli
           Workload Scheduler for z/OS 8.2 and you are using end-to-end scheduling in the
           8.1 environment, we suggest that you allocate new dedicated USS HFS clusters
           for the Tivoli Workload Scheduler for z/OS 8.2 work files and installation binaries.


3.4.5 Data sets related to end-to-end scheduling
           Tivoli Workload Scheduler for z/OS has several data sets that are dedicated for
           end-to-end scheduling:
               End-to-end input and output data sets (EQQTWSIN and EQQTWSOU).
               These data sets are used to send events from controller to server and from
               server to controller. They must be defined in the controller and end-to-end
               server started task procedures.
              Current plan backup copy data set to create Symphony (EQQSCPDS). This is
              a VSAM data set used as a CP backup copy for the production of the


Symphony file in USS. It must be defined in controller started task procedure
                    and in the current plan extend job, the current plan replan job, and the
                    Symphony renew job.
                    End-to-end script library (EQQSCLIB) is a partitioned data set that holds
                    commands or job definitions for fault-tolerant agent jobs. This must be defined
                    in the controller started task procedure and in the current plan extend job,
                    current plan replan job, and the Symphony renew job.
                    End-to-end centralized script data set (EQQTWSCS). This is a partitioned
                    data set that holds scripts for fault-tolerant agent jobs while they are sent to
                    the agent. It must be defined in the controller and end-to-end server started
                    task procedures.

                Plan for the allocation of these data sets, and remember to specify them in the
                controller and end-to-end server started task procedures, as well as in the
                current plan extend, replan, and Symphony renew jobs, as required.
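
                As a planning aid, the following sketch shows how these DD statements might
                look in the started task procedures and planning jobs, following the rules
                described above. The data set names are only examples; use your installation's
                naming standards.

                   //* In both the controller and the end-to-end server procedures:
                   //EQQTWSIN  DD DISP=SHR,DSN=TWS.INST.TWSIN
                   //EQQTWSOU  DD DISP=SHR,DSN=TWS.INST.TWSOU
                   //EQQTWSCS  DD DISP=SHR,DSN=TWS.INST.TWSCS
                   //* In the controller procedure and the CP extend, replan,
                   //* and Symphony renew jobs:
                   //EQQSCPDS  DD DISP=SHR,DSN=TWS.INST.SCP
                   //EQQSCLIB  DD DISP=SHR,DSN=TWS.INST.SCRPTLIB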

                In the planning phase you should also consider whether your installation will use
                centralized scripts, non-centralized (local) scripts, or a combination of centralized
                and non-centralized scripts.
                    Non-centralized (local) scripts
                    – In Tivoli Workload Scheduler for z/OS 8.2, it is possible to have job
                      definitions in the end-to-end script library and have the script (the job)
                      executed on the fault-tolerant agent. This is referred to as a
                      non-centralized script (see the sketch after this list).
                    – Using non-centralized scripts makes it possible for the fault-tolerant agent
                      to run local jobs without any connection to the controller on mainframe.
                    – On the other hand, if a non-centralized script must be updated, the
                      update must be made locally on the agent.
                    – Locally placed scripts can be consolidated in a central repository on the
                      mainframe or on a fault-tolerant agent; then, on a daily basis, changed
                      or updated scripts can be distributed to the FTAs where they will be
                      executed. By doing this, you can keep all scripts in a common repository.
                      This facilitates easy modification of scripts, because you only have to
                      change the scripts in one place. We recommend this option because it
                      gives most of the benefits of using centralized scripts without sacrificing
                      fault tolerance.
                    Centralized scripts
                    – Another possibility in Tivoli Workload Scheduler for z/OS 8.2 is to have the
                      scripts on the mainframe. The scripts will then be defined in the controller
                      job library and, via the end-to-end server, the controller will send the script
                      to the fault-tolerant agent when jobs are ready to run.



– This makes it possible to centrally manage all scripts.
              – However, it compromises the fault tolerance in the end-to-end scheduling
                network, because the controller must have a connection to the
                fault-tolerant agent to be able to send the script.
              – The centralized script function makes migration from Tivoli OPC tracker
                agents with centralized scripts to end-to-end scheduling much simpler.
              Combination of non-centralized and centralized scripts
              – The third possibility is to use a combination of non-centralized and
                centralized scripts.
              – Here the decision can be made based on such factors as:
                  •   Where a particular FTA is placed in the network
                  •   How stable the network connection is to the FTA
                  •   How fast the connection is to the FTA
                  •   Special requirements for different departments to have dedicated
                      access to their scripts on their local FTA
              – For non-centralized scripts, it is still possible to have a centralized
                repository with the scripts and then, on a daily basis, to distribute changed
                or updated scripts to the FTAs with non-centralized scripts.
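
               Referring back to the non-centralized case in the first bullet above, a member
               of the end-to-end script library (EQQSCLIB) might contain a JOBREC definition
               like the following sketch. The member name, script path, and user ID are made
               up for this example.

                  /* Member DJOB01 in EQQSCLIB: script executed locally on the FTA */
                  JOBREC JOBSCR('/opt/tws/scripts/daily_load.sh')
                         JOBUSR(twsuser)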


3.4.6 TCP/IP considerations for end-to-end server in sysplex
           In Tivoli Workload Scheduler end-to-end scheduling, the TCP/IP protocol is used
           to communicate between the end-to-end server task and the domain managers
           at the first level.

           The fault-tolerant architecture in the distributed network has the advantage that
           the individual FTAs in the distributed network can continue their own processing
           during a network failure. If there is no connection to the controller on the
           mainframe, the domain managers at the first level will buffer their events in a local
           file called tomaster.msg. This buffering continues until the link to the end-to-end
           server is re-established. If there is no connection between the domain managers
           at the first level and the controller on the mainframe side, dependencies between
           jobs on the mainframe and jobs in the distributed environment cannot be
           resolved. You cannot schedule these jobs before the connection is
           re-established.

           If the connection is down when, for example, a new plan is created, this new plan
           (the new Symphony file) will not be distributed to the domain managers at the
           first level and further down in the distributed network.




In the planning phase, consider what can happen:
                    When the z/OS system with the controller and server tasks fails
                    When the controller or the server task fails
                    When the z/OS system with the controller has to be stopped for a longer time
                    (for example, due to maintenance).

                The goal is to make the end-to-end server task and the controller task as fail-safe
                as possible and to make it possible to move these tasks from one system to
                another within a sysplex without any major disruption in the mainframe and
                distributed job scheduling.

                As explained earlier, the end-to-end server is a started task that must be running
                on the same z/OS system as the controller. The end-to-end server handles all
                communication with the controller task and the domain managers at the first level
                in the distributed Tivoli Workload Scheduler distributed network.

                One of the main reasons to configure the controller and server task in a sysplex
                environment is to make these tasks as fail-safe as possible. This means that the
                tasks can be moved from one system to another within the same sysplex without
                any stop in the batch scheduling. The controller and server tasks can be moved
                as part of planned maintenance or in case a system fails. Handling of this
                process can be automated and made seamless for the user by using the Tivoli
                Workload Scheduler for z/OS Hot Standby function.

                The problem with running end-to-end scheduling in a z/OS sysplex and moving
                the end-to-end server from one system to another is that, by default, the
                end-to-end server gets its IP address from the TCP/IP stack of the z/OS system
                where it is started. If the end-to-end server is moved to another z/OS system
                within the sysplex, it normally gets another IP address (Figure 3-2).




   1. The active controller and server run on one z/OS system in the sysplex. The
      server has a “system dependent” IP address (IP-address 1).
   2. The active engine is moved to another system in the z/OS sysplex. The server
      gets a new “system dependent” IP address (IP-address 2), which can cause
      problems for FTA connections because the IP address is in the Symphony file.

Figure 3-2 Moving one system to another within a z/OS sysplex

When the end-to-end server starts, it looks in the topology member to find its
host name or IP address and port number. In particular, the host name or IP
address is:
   Used to identify the socket from which the server receives and sends data
   from and to the distributed agents (domain managers at the first level)
   Stored in the Symphony file and is recognized by the distributed agents as the
   IP address (or host name) of the master domain manager (OPCMASTER)

If the host name is not defined or the default is used, the end-to-end server by
default will use the host name that is returned by the operating system (that is,
the host name returned by the active TCP/IP stack on the system).

The port number and host name will be inserted in the Symphony file when a
current plan extend or replan batch job is submitted or a Symphony renew is
initiated in the controller task. The Symphony file will then be distributed to the
domain managers at the first level. The domain managers at the first level in turn
use this information to link back to the server.




                MASTERDM: z/OS sysplex with the active controller and end-to-end server on
                wtsc64 (9.12.6.9) and standby controllers on wtsc63 (9.12.6.8) and wtsc65
                (9.12.6.10).

                UK domain: domain manager U000, london (AIX, 9.3.4.63), with FTAs U001,
                belfast (AIX, 9.3.4.64), and U002, edinburgh (W2K, 9.3.4.188).

                Europe domain: domain manager E000, geneva (Windows 2000, 9.3.4.185),
                with FTAs E001, rome (AIX, 9.3.4.122), and E002, amsterdam (W2K, 9.3.4.187).

                Nordic domain: domain manager N000, stockholm (AIX, 9.3.4.47), connected to
                the sysplex using SSL; its FTAs N001, oslo (W2K, 10.2.3.184), N002, helsinki
                (Linux, 10.2.3.190), and N003, copenhagen (W2K, 10.2.3.189), are reached
                through a firewall and router.

                Figure 3-3 First-level domain managers connected to Tivoli Workload Scheduler for z/OS
                server in z/OS sysplex

                If the z/OS controller fails on the wtsc64 system (see Figure 3-3), the standby
                controller either on wtsc63 or wtsc65 can take over all of the engine functions
                (run the controller and the end-to-end server tasks). Which controller takes over
                depends on how the standby controllers are configured.

                The domain managers at the first level (london, geneva, and stockholm in
                Figure 3-3 on page 132) know wtsc64 as their master domain manager (from the
                Symphony file), so the link from the domain managers to the end-to-end server
                will fail, no matter which system (wtsc63 or wtsc65) the controller takes over on.
                One solution could be to send a new Symphony file (renew the Symphony file)
                from the controller and server that have taken over, to the domain managers at
                the first level.

                Doing a renew of the Symphony file on the new controller and server recreates
                the Symphony file and adds the new z/OS host name or IP address (read from
                the topology definition or returned by the z/OS operating system) to the
                Symphony file. The domain managers then use this information to reconnect to
                the server on the new z/OS system.

                Since renewing the Symphony file can be disruptive, especially in a heavily
                loaded scheduling environment, we explain three alternative strategies that can



be used to solve the reconnection problem after the server and controller have
been moved to another system in a sysplex.

For all three alternatives, the topology member is used to specify the host name
and port number for the Tivoli Workload Scheduler for z/OS server task. The host
name is copied to the Symphony file when the Symphony file is renewed or the
Tivoli Workload Scheduler for z/OS current plan is extended or replanned. The
distributed domain managers at the first level use the host name read from the
Symphony file to connect to the end-to-end server.

Because the first-level domain managers will try to link to the end-to-end server
using the host name that is defined in the server hostname parameter, you must
take the required action to re-establish the connection successfully. Make sure
that the host name always resolves to the IP address of the z/OS system with the
active end-to-end server. This can be achieved in different ways.

The following three sections describe three different ways to handle the
reconnection problem when the end-to-end server is moved from one system to
another in the same sysplex.

Use of the host file on the domain managers at the first level
To be able to use the same host name after a fail-over situation (where the
engine is moved to one of its backup engines) and gain additional flexibility, we
will use a host name that always can be resolved to the IP address of the z/OS
system with the active end-to-end server. The resolution of the host name is
done by the first-level domain managers using their local host files to get the IP
address of the z/OS system with the end-to-end server.

In the end-to-end server topology we can define a host name with a given name
(such as TWSCE2E). This host name will be associated with an IP address by
the TCP/IP stack, for example in the USS /etc/hosts file, where the end-to-end
server is active.

The different IP addresses of the systems where the engine can be active are
defined in the host name file (/etc/hosts on UNIX) on the domain managers at the
first level, as in Example 3-1.
Example 3-1 hosts file
9.12.6.8 wtsc63.itso.ibm.com
9.12.6.9 wtsc64.itso.ibm.com TWSCE2E
9.12.6.10 wtsc65.itso.ibm.com


If the server is moved to the wtsc63 system, you only have to edit the hosts file
on the domain managers at the first level, so TWSCE2E now points to the new
system as in Example 3-2.



Example 3-2 hosts file
                9.12.6.8 wtsc63.itso.ibm.com TWSCE2E
                9.12.6.9 wtsc64.itso.ibm.com
                9.12.6.10 wtsc65.itso.ibm.com


                This change takes effect dynamically (the next time the domain manager tries to
                reconnect to the server).

                One major disadvantage of this solution is that the change must be carried out
                by editing a local file on the domain managers at the first level. A simple move of
                the tasks on the mainframe then involves changes on distributed systems as well.
                In our example in Figure 3-3 on page 132, the local hosts file would have to be
                edited on three domain managers at the first level (the london, geneva, and
                stockholm servers).

                Furthermore, localopts nm ipvalidate must be set to none on the agent, because
                the node name and IP address for the end-to-end server, which are stored for the
                OPCMASTER workstation (the workstation representing the end-to-end server)
                in the Symphony file on the agent, have changed. See the IBM Tivoli Workload
                Scheduler Planning and Installation Guide, SC32-1273, for further information.
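
                In the localopts file on each first-level domain manager, this setting looks like:

                   nm ipvalidate = none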

                Use of stack affinity on the z/OS system
                Another possibility is to use stack affinity to ensure that the end-to-end server
                host name resolves to the same IP address, even if the end-to-end server is
                moved to another z/OS system in the sysplex.

                With stack affinity, the end-to-end server host name will always be resolved using
                the same TCP/IP stack (the same TCP/IP started task) and hence always get the
                same IP address, regardless of which z/OS system the end-to-end server is
                started on.

                Stack affinity provides the ability to define which specific TCP/IP instance the
                application should bind to. If you are running in a multiple-stack environment in
                which each system has its own TCP/IP stack, the end-to-end server can be
                forced to use a specific stack, even if it runs on another system.

                A specific stack, or stack affinity, is defined in the Language Environment®
                variable _BPXK_SETIBMOPT_TRANSPORT. To define environment variables
                for the end-to-end server, a DD statement named STDENV should be added to
                the end-to-end server started task procedure. The STDENV DD statement can
                point to a sequential data set or a member of a partitioned data set (for
                example, a member of the



end-to-end server PARMLIB) in which it is possible to define environment
variables to initialize Language Environment.

In this data set or member, environment variables can be specified in the form
VARNAME=value. See IBM Tivoli Workload Scheduler for z/OS Installation,
SC32-1264, for further information.

For example:
   //STDENV    DD    DISP=SHR,DSN=MY.FILE.PARM(STDENV)

This member can be used to set the stack affinity using the following environment
variable.
   _BPXK_SETIBMOPT_TRANSPORT=xxxxx

(xxxxx indicates the TCP/IP stack the end-to-end server should bind to.)
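
For example, if the stack to bind to is a TCP/IP started task named TCPIPA (an
illustrative name; use the name of your own stack), the STDENV member would
contain a single line:

   _BPXK_SETIBMOPT_TRANSPORT=TCPIPA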

One disadvantage of stack affinity is that a particular stack on a specific z/OS
system is used. If this stack (the TCP/IP started task) or the z/OS system with
this stack has to be stopped or requires an IPL, the end-to-end server, even
though it can run on another system, will not be able to establish connections to
the domain managers at the first level. If this happens, manual intervention is
required.

For more information, see the z/OS V1R2 Communications Server: IP
Configuration Guide, SC31-8775.

Use of Dynamic Virtual IP Addressing (DVIPA)
DVIPA, which was introduced with OS/390 V2R8, makes it possible to assign a
specific virtual IP address to a specific application. The configuration can be set
up so that this virtual IP address is independent of any specific TCP/IP stack
within the sysplex and tied to the started application; that is, the IP address will
be the same for the application no matter which system in the sysplex the
application is started on.

Even if your application has to be moved to another system because of failure or
maintenance, the application can be reached under the same virtual IP address.
Use of DVIPA is the most flexible way to be prepared for application or system
failure.

We recommend that you plan for use of DVIPA for the following Tivoli Workload
Scheduler for z/OS components:
   Server started task used for end-to-end scheduling
   Server started task used for the JSC communication




                The Tivoli Workload Scheduler for z/OS end-to-end (and JSC) server has been
                improved for Version 8.2. This improvement allows the end-to-end (and JSC)
                server to make better use of DVIPA than was possible in Tivoli Workload
                Scheduler 8.1.

                In IBM Tivoli Workload Scheduler for z/OS 8.1, a range of IP addresses to be
                used by DVIPA (VIPARANGE) had to be defined, as did specific PORT and IP
                addresses for the end-to-end server (Example 3-3).
                Example 3-3 Some required DVIPA definitions for Tivoli Workload Scheduler for z/OS 8.1
                VIPADYNAMIC
                viparange define 255.255.255.248 9.12.6.104
                ENDVIPADYNAMIC
                PORT
                5000 TCP TWSJSC BIND 9.12.6.106
                31182 TCP TWSCE2E BIND 9.12.6.107


                In this example, DVIPA automatically assigns IP address 9.12.6.107 to the
                started task TWSCE2E, which represents our end-to-end server task and is
                configured to use port 31182.

                DVIPA is described in great detail in the z/OS V1R2 Communications Server: IP
                Configuration Guide, SC31-8775. In addition, the redbook TCP/IP in a Sysplex,
                SG24-5235, provides useful information for DVIPA.

                One major problem with using DVIPA in the Tivoli Workload Scheduler for z/OS
                8.1 end-to-end server was that the end-to-end server mailman process still used
                the IP address of the z/OS system (the local IP address for outbound
                connections was determined by the routing table on the z/OS system). If
                localopts nm ipvalidate was set to full on the first-level domain manager or
                backup domain manager, the outbound connection from the end-to-end server
                mailman process to the domain manager netman was rejected by the domain
                manager netman process. The result was that the outbound connection could
                not be established when the end-to-end server was moved from one system in
                the sysplex to another.

                This is changed in Tivoli Workload Scheduler for z/OS 8.2, so the end-to-end
                server uses the host name or IP address that is specified in the TOPOLOGY
                HOSTNAME parameter both for inbound and outbound connections. This has
                the following advantages compared to Version 8.1:
                1. It is not necessary to define the end-to-end server started task in the static
                   DVIPA PORT definition. It is sufficient to define the DVIPA VIPARANGE
                   parameter.
                    When the end-to-end server starts and reads the TOPOLOGY HOSTNAME()
                    parameter, it performs a gethostbyname() on the host name. The host name
                    can be related to an IP address (in the VIPARANGE), for example in the USS


/etc/hosts file. It then will get the same IP address across z/OS systems in the
              sysplex.
               Another major advantage is that if the host name or IP address is going to be
               changed, it is sufficient to make the change in the /etc/hosts file. It is not
               necessary to change the TCP/IP definitions and restart the TCP/IP stack (as
               long as the new IP address is within the defined range of IP addresses in the
               VIPARANGE parameter).
           2. The host name in the TOPOLOGY HOSTNAME() parameter is used for
              outbound connections (from end-to-end server to the domain managers at the
              first level).
           3. You can use network address IP validation on the domain managers at the
              first level.

           The advantages of 1 and 2 also apply to the JSC server.

           Example 3-4 shows required DVIPA definitions for Tivoli Workload Scheduler 8.2
           in our environment.
           Example 3-4 Example of required DVIPA definitions for ITWS for z/OS 8.2
           VIPADYNAMIC
           viparange define 255.255.255.248 9.12.6.104
           ENDVIPADYNAMIC

           And the /etc/hosts file in USS looks like:

           9.12.6.107      twsce2e.itso.ibm.com twsce2e


            Note: In the previous example, we show use of the /etc/hosts file in USS. For
            DVIPA, it is advisable to use the DNS instead of the /etc/hosts file because the
            /etc/hosts definitions in general are defined locally on each machine (each
            z/OS image) in the sysplex.


3.4.7 Upgrading from Tivoli Workload Scheduler for z/OS 8.1
end-to-end scheduling
           If you are running Tivoli Workload Scheduler for z/OS 8.1 end-to-end scheduling
           and are going to upgrade this environment to the 8.2 level, you should plan for
           use of the new functions and possibilities in Tivoli Workload Scheduler for z/OS
           8.2 end-to-end scheduling.




Be aware especially of the new possibilities introduced by:
                    Centralized script
                    Are you using non-centralized script in the Tivoli Workload Scheduler for z/OS
                    8.1 scheduling environment?
                    Will it be better or more efficient to use centralized scripts?
                    If centralized scripts are going to be used, you should plan for necessary
                    activities to have the non-centralized scripts consolidated in Tivoli Workload
                    Scheduler for z/OS controller JOBLIB.
                    JCL variables in centralized or non-centralized scripts or both
                    In Tivoli Workload Scheduler for z/OS 8.2 you can use Tivoli Workload
                    Scheduler for z/OS JCL variables in centralized and non-centralized scripts.
                    If you have implemented some locally developed workaround in Tivoli
                    Workload Scheduler for z/OS 8.1 to use JCL variables in the Tivoli Workload
                    Scheduler for z/OS non-centralized script, you should consider using the new
                    possibilities in Tivoli Workload Scheduler for z/OS 8.2.
                    Recovery for jobs with non-centralized and centralized scripts
                    Can the use of recovery in jobs with non-centralized or centralized scripts
                    improve your end-to-end scheduling? Is it something you should use in your
                    Tivoli Workload Scheduler for z/OS 8.2 environment?
                    Should the Tivoli Workload Scheduler for z/OS 8.1 job definitions be updated
                    or changed to use these new recovery possibilities?
                    Here again, some planning and considerations will be of great value.
                    New options and possibilities when defining fault-tolerant workstation jobs in
                    Tivoli Workload Scheduler for z/OS and working with fault-tolerant
                    workstations.
                    Tivoli Workload Scheduler for z/OS 8.2 introduces some new options in the
                    legacy ISPF dialog as well as in the JSC, when defining fault-tolerant jobs in
                    Tivoli Workload Scheduler for z/OS.
                    Furthermore, the legacy ISPF dialogs have changed and improved, and new
                    options have been added to work more easily with fault-tolerant workstations.

                Be prepared to educate your planners and operations staff so that they know
                how to use these new options and functions!

                End-to-end scheduling is greatly improved in Version 8.2 of Tivoli Workload
                Scheduler for z/OS. Together with this improvement, several initialization
                statements have been changed. Furthermore, the network configuration for the
                end-to-end environment can be designed in another way in Tivoli Workload




Scheduler for z/OS 8.2 because, for example, Tivoli Workload Scheduler for z/OS
           8.2 supports more than one first-level domain manager.

           To summarize: Expect to take some time to plan your upgrade from Tivoli
           Workload Scheduler for z/OS Version 8.1 end-to-end scheduling to Version 8.2
           end-to-end scheduling, because Tivoli Workload Scheduler for z/OS Version 8.2
           has been improved with many new functions and initialization parameters.

            Plan some time to investigate and read the new Tivoli Workload Scheduler for
            z/OS 8.2 documentation (remember to use the April 2004 Revised versions) to
            get a good understanding of the new end-to-end scheduling possibilities in Tivoli
            Workload Scheduler for z/OS Version 8.2 compared to V8.1.

           Furthermore, plan time to test and verify the use of the new functions and
           possibilities in Tivoli Workload Scheduler for z/OS 8.2 end-to-end scheduling.



3.5 Planning for end-to-end scheduling with Tivoli
Workload Scheduler
           In this section, we discuss how to plan end-to-end scheduling for Tivoli Workload
           Scheduler. We show how to configure your environment to fit your requirements,
           and we point you to special considerations that apply to the end-to-end solution
           with Tivoli Workload Scheduler for z/OS.


3.5.1 Tivoli Workload Scheduler publications and documentation
           Hardcopy Tivoli Workload Scheduler documentation is not shipped with the
           product. The books are available in PDF format on the Tivoli Workload Scheduler
           8.2 product CD-ROM.

            Note: The publications are also available for download in PDF format at:
            http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html

            Look for books marked with “Revised April 2004,” as they have been updated
            with documentation changes that were introduced by service (fix pack) for
            Tivoli Workload Scheduler that was produced since the base version of the
            product was released in June 2003.




3.5.2 Tivoli Workload Scheduler service updates (fix packs)
                Before installing Tivoli Workload Scheduler, it is important to check for the latest
                service (fix pack) for Tivoli Workload Scheduler. Service for Tivoli Workload
                Scheduler is released in packages that normally contain a full replacement of the
                Tivoli Workload Scheduler code. These packages are called fix packs and are
                numbered FixPack 01, FixPack 02, and so forth. New fix packs are usually
                released every three months. The base version of Tivoli Workload Scheduler
                must be installed before a fix pack can be installed.

                Check for the latest fix pack level and download it so that you can update your
                Tivoli Workload Scheduler installation and test the end-to-end scheduling
                environment on the latest fix pack level.

                  Tip: Fix packs for Tivoli Workload Scheduler can be downloaded from:
                  ftp://ftp.software.ibm.com

                  Log on with user ID anonymous and your e-mail address for the password. Fix
                  packs for Tivoli Workload Scheduler are in this directory:
                  /software/tivoli_support/patches/patches_8.2.0.

                At the time of writing this book, the latest fix pack for Tivoli Workload Scheduler
                was FixPack 04.

                When the fix pack is downloaded, installation guidelines can be found in the
                8.2.0-TWS-FP04.README file.

                  Note: FixPack 04 introduces a new Fault-Tolerant Switch Feature, which is
                  described in a PDF file named FaultTolerantSwitch.README.

                  The new Fault-Tolerant Switch Feature replaces and enhances the existing or
                  traditional Fault-Tolerant Switch Manager for backup domain managers.

                The Tivoli Workload Scheduler documentation has been updated to FixPack 03
                in the “April 2004 Revised” versions of the Tivoli Workload Scheduler manuals.
                As mentioned in 3.5.1, “Tivoli Workload Scheduler publications and
                documentation” on page 139, the latest versions of the Tivoli Workload
                Scheduler manuals can be downloaded from the IBM Web site.


3.5.3 System and software requirements
                System and software requirements for installing and running Tivoli Workload
                Scheduler are described in great detail in the IBM Tivoli Workload Scheduler
                Release Notes Version 8.2 (Maintenance Release April 2004), SC32-1277.


           It is very important to consult and read this release notes document before
           installing Tivoli Workload Scheduler, because the release notes contain system
           and software requirements, as well as the latest installation and upgrade notes.


3.5.4 Network planning and considerations
           Before you install Tivoli Workload Scheduler, be sure that you know about the
           various configuration examples. Each example has specific benefits and
           disadvantages. Here are some guidelines to help you to find the right choice:
              How large is your IBM Tivoli Workload Scheduler network? How many
              computers does it hold? How many applications and jobs does it run?
              The size of your network will help you decide whether to use a single domain
              or the multiple-domain architecture. If you have a small number of computers
              or a small number of applications to control with Tivoli Workload Scheduler,
              there may not be a need for multiple domains.
              How many geographic locations will be covered in your Tivoli Workload
              Scheduler network? How reliable and efficient is the communication between
              locations?
              This is one of the primary reasons for choosing a multiple-domain
              architecture. One domain for each geographical location is a common
              configuration. If you choose single domain architecture, you will be more
              reliant on the network to maintain continuous processing.
              Do you need centralized or decentralized management of Tivoli Workload
              Scheduler?
              A Tivoli Workload Scheduler network, with either a single domain or multiple
              domains, gives you the ability to manage Tivoli Workload Scheduler from a
              single node, the master domain manager. If you want to manage multiple
              locations separately, you can consider installing a separate Tivoli Workload
              Scheduler network at each location. Note that some degree of decentralized
              management is possible in a stand-alone Tivoli Workload Scheduler network
              by mounting or sharing file systems.
              Do you have multiple physical or logical entities at a single site? Are there
              different buildings with several floors in each building? Are there different
              departments or business functions? Are there different applications?
              These may be reasons for choosing a multi-domain configuration, such as a
              domain for each building, department, business function, or application
              (manufacturing, financial, engineering).




Do you run applications, such as SAP R/3, that operate with Tivoli Workload
                    Scheduler?
                    If they are discrete and separate from other applications, you may choose to
                    put them in a separate Tivoli Workload Scheduler domain.
                    Would you like your Tivoli Workload Scheduler domains to mirror your
                    Windows NT domains? This is not required, but may be useful.
                    Do you want to isolate or differentiate a set of systems based on performance
                    or other criteria? This may provide another reason to define multiple Tivoli
                    Workload Scheduler domains to localize systems based on performance or
                    platform type.
                    How much network traffic do you have now? If your network traffic is
                    manageable, the need for multiple domains is less important.
                    Do your job dependencies cross system boundaries, geographical
                    boundaries, or application boundaries? For example, does the start of Job1
                    on workstation3 depend on the completion of Job2 running on workstation4?
                    The degree of interdependence between jobs is an important consideration
                    when laying out your Tivoli Workload Scheduler network. If you use multiple
                    domains, you should try to keep interdependent objects in the same domain.
                    This will decrease network traffic and take better advantage of the domain
                    architecture.
                    What level of fault tolerance do you require? An obvious disadvantage of the
                    single domain configuration is the reliance on a single domain manager. In a
                    multi-domain network, the loss of a single domain manager affects only the
                    agents in its domain.


3.5.5 Backup domain manager
                Each domain has a domain manager and, optionally, one or more backup
                domain managers. A backup domain manager (Figure 3-4 on page 143) must be
                in the same domain as the domain manager it is backing up. The backup domain
                managers must be fault-tolerant agents running the same product version as the
                domain manager they are supposed to replace, and must have the Resolve
                Dependencies and Full Status options enabled in their workstation definitions.
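
                In an end-to-end network, these two options correspond to keywords of the
                CPUREC topology statement for the fault-tolerant workstation. A minimal sketch
                for the backup domain manager FTA1 in Figure 3-4 on page 143 might look like
                the following; only the keywords relevant here are shown, and other required
                CPUREC keywords, such as the node name and operating system, are omitted.

                   CPUREC CPUNAME(FTA1)
                          CPUDOMAIN(DOMAINA)
                          CPUTYPE(FTA)
                          CPUFULLSTAT(ON)    /* Full Status          */
                          CPURESDEP(ON)      /* Resolve Dependencies */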

                If a domain manager fails during the production day, you can use either the Job
                Scheduling Console, or the switchmgr command in the console manager
                command line (conman), to switch to a backup domain manager. A switch
                manager action can be executed by anyone with start and stop access to the
                domain manager and backup domain manager workstations.




A switch manager operation stops the backup domain manager, then restarts it as
the new domain manager, and converts the old domain manager to a fault-tolerant
agent.
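
For example, to make FTA1 the domain manager for DomainA (the domain and
workstation names shown in Figure 3-4 on page 143), the conman command
would look like this:

   conman "switchmgr DOMAINA;FTA1"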

The identities of the current domain managers are documented in the Symphony
files on each FTA and remain in effect until a new Symphony file is received from
the master domain manager (OPCMASTER).



  MASTERDM: master domain manager OPCMASTER on z/OS.

  DomainA: domain manager FDMA (AIX), with FTA1 (AIX, backup domain manager
  for DomainA) and FTA2 (OS/400).

  DomainB: domain manager FDMB (AIX), with FTA3 (AIX, backup domain manager
  for DomainB) and FTA4 (Solaris).

Figure 3-4 Backup domain managers (BDM) within an end-to-end scheduling network

As mentioned in 2.3.5, “Making the end-to-end scheduling system fault tolerant”
on page 84, a switch to a backup domain manager remains in effect until a new
Symphony file is received from the master domain manager (OPCMASTER in
Figure 3-4). If the switch to the backup domain manager is to remain in effect
across a Tivoli Workload Scheduler for z/OS plan extension or replan, you must
change the topology definitions in the Tivoli Workload Scheduler for z/OS
DOMREC initialization statements so that the backup domain manager
fault-tolerant workstation becomes the domain manager for the domain.

Example 3-5 shows how DOMREC for DomainA is changed so that the backup
domain manager FTA1 in Figure 3-4 is the new domain manager for DomainA.




                Because the change is also made in the DOMREC topology definition (in
                connection with the switch of the domain manager from FDMA to FTA1), FTA1
                remains the domain manager even if the Symphony file is recreated by the Tivoli
                Workload Scheduler for z/OS plan extend or replan jobs.
                Example 3-5 Change in DOMREC for long-term switch to backup domain manager FTA1
                DOMREC DOMAIN(DOMAINA) DOMMGR(FDMA) DOMPARENT(OPCMASTER)

                Should be changed to:

                DOMREC DOMAIN(DOMAINA) DOMMGR(FTA1) DOMPARENT(OPCMASTER)

                FDMA is the name of the fault-tolerant workstation that was the domain manager
                before the switch.



3.5.6 Performance considerations
                Tivoli Workload Scheduler 8.1 introduced some important performance-related
                initialization parameters. These can be used to optimize or tune Tivoli Workload
                Scheduler networks. If you suffer from poor performance and have already
                isolated the bottleneck on the Tivoli Workload Scheduler side, you may want to
                take a closer look at the localopts parameters listed in Table 3-5 (default values
                shown in the table).
                Table 3-5 Performance-related localopts parameter
                  Syntax                                          Default value

                  mm cache mailbox=yes/no                         No

                  mm cache size = bytes                           32

                  sync level=low/medium/high                      High

                  wr enable compression=yes/no                    No

                These localopts parameters are described in detail in the following sections. For
                more information, check the IBM Tivoli Workload Scheduler Planning and
                Installation Guide, SC32-1273, and the redbook IBM Tivoli Workload Scheduler
                Version 8.2: New Features and Best Practices, SG24-6628.

                Mailman cache (mm cache mailbox and mm cache size)
                Tivoli Workload Scheduler can read groups of messages from a mailbox and put
                them into a memory cache. Access to messages through the cache is much
                faster than accessing the disk directly. The advantage is even more relevant
                when you consider that traditional mailman needs at least two disk accesses for
                every mailbox message.




 Important: The mm cache mailbox parameter can be used on both UNIX and
 Windows workstations. This option is not applicable (has no effect) on USS.

A special mechanism ensures that messages that are considered essential are
not put into cache but are handled immediately. This avoids loss of vital
information in case of a mailman failure. The settings in the localopts file regulate
the behavior of mailman cache:
   mm cache mailbox
   The default is no. Specify yes to enable mailman to use a reading cache for
   incoming messages.
   mm cache size
   Specify this option only if you use the mm cache mailbox option. The default
   is 32 bytes, which should be a reasonable value for most small and
   medium-sized Tivoli Workload Scheduler installations. The maximum value is
   512, and higher values are ignored.

      Tip: If necessary, you can experiment with increasing this setting gradually
      for better performance. You can use values larger than 32 bytes for large
      networks, but in small networks do not set this value unnecessarily large,
      because this would reduce the available memory that could be allocated to
      other applications or other Tivoli Workload Scheduler processes.


File system synchronization level (sync level)
The sync level attribute specifies the frequency at which Tivoli Workload
Scheduler synchronizes messages held on disk with those in memory. There are
three possible settings:
Low              Lets the operating system handle the speed of write access. This
                 option speeds up all processes that use mailbox files. Disk usage
                 is notably reduced, but if the file system is reliable the data
                 integrity should be assured anyway.
Medium           Makes an update to the disk after a transaction has completed.
                 This setting could be a good trade-off between acceptable
                 performance and high security against loss of data. Write is
                 transaction-based; data written is always consistent.
High             (default setting) Makes an update every time data is entered.




Important considerations for sync level usage:
   For most UNIX systems (especially newer UNIX systems with reliable disk
   subsystems), a setting of low or medium is recommended.
   In end-to-end scheduling, we recommend that you set this to low, because
   host disk subsystems are considered highly reliable.
   This option is not applicable on Windows systems.
   Regardless of the sync level value that you set in the localopts file, Tivoli
   Workload Scheduler makes an update every time data is entered for
   messages that are considered essential (that is, it uses sync level=high for
   essential messages). Essential messages are those that Tivoli Workload
   Scheduler considers of the utmost importance.
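Following this recommendation, the corresponding localopts line on the fault-tolerant
agents and domain managers in an end-to-end network would be a single setting (a sketch;
adjust it to your own reliability requirements):

   # localopts fragment (illustrative)
   sync level = low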


                Sinfonia file compression (wr enable compression)
Starting with Tivoli Workload Scheduler 8.1, domain managers can distribute
Sinfonia files to their FTAs in compressed form. Each Sinfonia record is
compressed by the mailman process on the domain manager and decompressed by the
writer process on the FTA. A compressed Sinfonia record is about seven times
smaller. Compression can be particularly useful when the Symphony file is large
and the network connection between two nodes is slow or unreliable (for example,
over a WAN). If any FTAs in the network run pre-8.1 versions of Tivoli Workload
Scheduler, domain managers can send Sinfonia files to those workstations in
uncompressed form.

                The following localopts setting is used to set compression in Tivoli Workload
                Scheduler:
                    wr enable compression=yes: This means that Sinfonia will be compressed.
                    The default is no.

                      Tip: Due to the overhead of compression and decompression, we
                      recommend that you use compression if Sinfonia is 4 MB or larger.


3.5.7 Fault-tolerant agent (FTA) naming conventions
                Each FTA represents a physical machine within a Tivoli Workload Scheduler
                network. Depending on the size of your distributed environment or network and
how much it can grow in the future, it makes sense to think about naming
conventions for your FTAs and possibly also for your Tivoli Workload Scheduler
domains. A good naming convention for FTAs and domains helps you identify an FTA
easily in terms of where it is located or the business unit it belongs to. This
becomes more important in end-to-end scheduling environments because the length
of the workstation name for an FTA is limited in Tivoli Workload Scheduler for z/OS.



Note: The name of a Tivoli Workload Scheduler for z/OS workstation, including
 the fault-tolerant workstations used in end-to-end scheduling, is limited to
 four characters. The name must be alphanumeric, and the first character must
 be alphabetic or national.

Figure 3-5 on page 147 shows a typical end-to-end network. It consists of two
domain managers at the first level, two backup domain managers, and some
FTAs.



  [Figure content: MASTERDM with master domain manager OPCMASTER on z/OS;
  Domain1 with domain manager F100 (AIX), backup domain manager F101 (AIX), and
  FTA F102 (OS/400); Domain2 with domain manager F200 (AIX), backup domain
  manager F201 (AIX), and FTA F202 (Solaris).]
Figure 3-5 Example of naming convention for FTA workstations in end-to-end network

In Figure 3-5, we illustrate one naming convention for the fault-tolerant
workstations in Tivoli Workload Scheduler for z/OS. The idea behind this naming
convention is the following:
   First digit
   The character F identifies the workstation as an FTA. This makes it possible,
   for example, to create lists in the legacy ISPF interface and in the JSC that
   show all FTAs.




Second digit
                     A character or number that identifies the domain of the workstation.
                     Third and fourth digits
                     Used to allow a high number of uniquely named servers or machines. The last
                     two digits are reserved for numbering each workstation.

                With this naming convention there will be room to define 1296 (that is, 36*36)
                fault-tolerant workstations for each domain named F1** to FZ**. If the domain
                manager fault-tolerant workstation for the first domain is named F100 (F000 is
                not used), it will be possible to define 35 domains with 1296 FTAs in each domain
                — that is, 45360 FTAs.

                This example is meant to give you an idea of the number of fault-tolerant
                workstations that can be defined, even using only four digits in the name.

In the example, we did not change the first character in the workstation name: It
was “fixed” at F. It is, of course, possible to use different characters here as well;
for example, one could use D for domain managers and F for fault-tolerant
agents. Changing the first character in the workstation name increases the total
number of fault-tolerant workstations that can be defined in Tivoli Workload
Scheduler for z/OS. The example cannot cover all specific requirements; it simply
demonstrates that naming needs careful consideration.

Because a four-character name for the FTA workstation does not tell much about
the server name or IP address of the server where the FTA is installed, another
good practice is to put the server name (the DNS name or perhaps the IP address)
in the description field for the workstation in Tivoli Workload Scheduler for z/OS.
The description field for workstations in Tivoli Workload Scheduler for z/OS
allows up to 32 characters. This way, it is much easier to relate the
four-character workstation name to a specific server in your distributed network.

                Example 3-6 shows how the description field can relate the four-character
                workstation name to the server name for the fault-tolerant workstations used in
                Figure 3-5 on page 147.

                  Tip: The host name in the workstation description field, in conjunction with the
                  four-character workstation name, provides an easy way to illustrate your
                  configured environment.

                Example 3-6 Workstation description field (copy of workstation list in the ISPF panel)
Work station                                       T   R   Last update
name   description                                         user     date       time

F100   COPENHAGEN   -   AIX DM for Domain1         C   A   CCFBK    04/07/16   14.59
F101   STOCKHOLM    -   AIX BDM for Domain1        C   A   CCFBK    04/07/16   15.00
F102   OSLO         -   OS/400 LFTA in DM1         C   A   CCFBK    04/07/16   15.00
F200   ROM          -   AIX DM for Domain2         C   A   CCFBK    04/07/16   15.02
F201   MILANO       -   AIX BDM for Domain2        C   A   CCFBK    04/07/16   15.08
F202   VENICE       -   SOLARIS FTA in DM2         C   A   CCFBK    04/07/16   15.17




3.6 Planning for the Job Scheduling Console
In this section, we discuss planning considerations for the Tivoli Workload
Scheduler Job Scheduling Console (JSC). The JSC is not a required component
when running end-to-end scheduling with Tivoli Workload Scheduler. The JSC
provides a unified GUI to the different job-scheduling engines: the Tivoli
Workload Scheduler for z/OS controller and the Tivoli Workload Scheduler master
domain manager, domain managers, and fault-tolerant agents.

         Job Scheduling Console 1.3 is the version that is delivered and used with Tivoli
         Workload Scheduler 8.2 and Tivoli Workload Scheduler for z/OS 8.2. The JSC
         code is shipped together with the Tivoli Workload Scheduler for z/OS or the Tivoli
         Workload Scheduler code.

With the JSC, it is possible to work with different Tivoli Workload Scheduler for
z/OS controllers (such as test and production) from one GUI. From this same GUI,
the user can also work with Tivoli Workload Scheduler master domain managers or
fault-tolerant agents.

         In end-to-end scheduling environments, the JSC can be a helpful tool when
         analyzing problems with the end-to-end scheduling network or for giving some
         dedicated users access to their own servers (fault-tolerant agents).

         The JSC is installed locally on your personal desktop, laptop, or workstation.

         Before you can run and use the JSC, the following additional components must
         be installed and configured:
            Tivoli Management Framework, V3.7.1 or V4.1
The following components installed and configured in the Tivoli Management Framework:
            – Job Scheduling Services (JSS)
            – Tivoli Workload Scheduler connector
            – Tivoli Workload Scheduler for z/OS connector
            – JSC instances for Tivoli Workload Scheduler and Tivoli Workload
              Scheduler for z/OS environments



Server started task on mainframe used for JSC communication
                    This server started task is necessary to communicate and work with Tivoli
                    Workload Scheduler for z/OS from the JSC.


3.6.1 Job Scheduling Console documentation
                The documentation for the Job Scheduling Console includes:
                    IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes
                    (Maintenance Release April 2004), SC32-1277
                    IBM Tivoli Workload Scheduler Job Scheduling Console Users Guide
                    (Maintenance Release April 2004), SC32-1257. This manual contains
                    information about how to:
                    – Install and update the JSC.
                    – Install and update JSS, Tivoli Workload Scheduler connector and Tivoli
                      Workload Scheduler for z/OS connector.
                    – Create Tivoli Workload Scheduler connector instances and Tivoli
                      Workload Scheduler for z/OS connector instances.
                    – Use the JSC to work with Tivoli Workload Scheduler.
                    – Use the JSC to work with Tivoli Workload Scheduler for z/OS.

                The documentation is not shipped in hardcopy form with the JSC code, but is
                available in PDF format on the JSC Version 1.3 CD-ROM.

                  Note: The publications are also available for download in PDF format at:
                  http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html

                  Here you can find the newest versions of the books. Look for books marked
                  with “Maintenance Release April 2004” because they have been updated with
                  documentation changes introduced after the base version of the product was
                  released in June 2003.


3.6.2 Job Scheduling Console service (fix packs)
Before installing the JSC, it is important to check for and, if necessary, download
the latest service (fix pack) level. Service for the JSC is released in packages that
normally contain a full replacement of the product. These packages are called fix
packs and are numbered FixPack 01, FixPack 02, and so forth. Usually, a new fix
pack is released every three months. The base version of the JSC must be
installed before a fix pack can be installed.




Tip: Fix packs for JSC can be downloaded from the IBM FTP site:
            ftp://ftp.software.ibm.com

            Log in with user ID anonymous and use your e-mail address for the password.
            Look for JSC fix packs in the /software/tivoli_support/patches/patches_1.3.0
            directory.

            Installation guidelines are in the 1.3.0-JSC-FP05.README text file.

At the time this book was written, the latest fix pack for the JSC was FixPack 05.
Note that the JSC fix pack level should correspond to the connector fix pack
level; that is, apply the same fix pack level to the JSC and to the connector at
the same time.
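For illustration, an anonymous FTP session to retrieve a JSC fix pack might look like the
following sketch. The directory and README names are taken from the tip above; the actual
fix pack image names vary by fix pack level and platform, so check the README first.

   ftp ftp.software.ibm.com
   Name: anonymous
   Password: <your e-mail address>
   ftp> cd /software/tivoli_support/patches/patches_1.3.0
   ftp> binary
   ftp> get 1.3.0-JSC-FP05.README
   ftp> quit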

            Note: FixPack 05 improves performance for the JSC in two areas:
                  Response time improvements
                  Memory consumption improvements


3.6.3 Compatibility and migration considerations for the JSC
           The Job Scheduling Console feature level 1.3 can work with different versions of
           Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS.

           Before installing the Job Scheduling Console, consider Table 3-6 and Table 3-7
           on page 152, which summarize the supported interoperability combinations
           between the Job Scheduling Console, the connectors, and the Tivoli Workload
           Scheduler and Tivoli Workload Scheduler for z/OS engines.

           Table 3-6 shows the supported combinations of JSC, Tivoli Workload Scheduler
           connectors, and Tivoli Workload Scheduler engine (master domain manager,
           domain manager, or fault-tolerant agent).
           Table 3-6 Tivoli Workload Scheduler connector and engine combinations
            Job Scheduling Console           Connector           Tivoli Workload Scheduler engine

            1.3                              8.2                8.2

            1.3                              8.1                8.1

            1.2                              8.2                8.2


            Note: The engine can be a fault-tolerant agent, a domain manager, or a
            master domain manager.



Table 3-7 shows the supported combinations of JSC, Tivoli Workload Scheduler
                for z/OS connectors, and Tivoli Workload Scheduler for z/OS engine (controller).
                Table 3-7 Tivoli Workload Scheduler for z/OS connector and engine combinations
                  Job Scheduling Console          Connector            IBM Tivoli Workload Scheduler for
                                                                       z/OS engine (controller)

                  1.3                             1.3                  8.2

                  1.3                             1.3                  8.1

                  1.3                             1.3                  2.3 (Tivoli OPC)

                  1.3                             1.2                  8.1

                  1.3                             1.2                  2.3 (Tivoli OPC)

                  1.2                             1.3                  8.2

                  1.2                             1.3                  8.1

                  1.2                             1.3                  2.3 (Tivoli OPC)


Note: If your environment comprises installations of updated and back-level
 versions of the products, some functions might not work correctly. For
 example, new functions such as Secure Socket Layer (SSL) protocol support,
 return code mapping, late job handling, and extended task name and recovery
 information for z/OS jobs are not supported by Job Scheduling Console feature
 level 1.2. A warning message is displayed if you try to open an object created
 with the new functions, and the object is not opened.


                Satisfy the following requirements before installing
                The following software and hardware prerequisites and other considerations
                should be taken care of before installing the JSC.

                Software
                The following is required software:
                    Tivoli Management Framework Version 3.7.1 with FixPack 4 or higher
                    Tivoli Job Scheduling Services 1.2

                Hardware
                The following is required hardware:
                    CD-ROM drive
                    Approximately 200 MB free disk space for installation of the JSC
                    At least 256 MB RAM (preferably 512 MB RAM)




Other
           The Job Scheduling Console can be installed on any workstation that has a
           TCP/IP connection. It can connect only to a server or workstation that has
           properly configured installations of the following products:
              Job Scheduling Services and IBM Tivoli Workload Scheduler for z/OS
              connector (mainframe-only scheduling solution)
              Job Scheduling Services and Tivoli Workload Scheduler connector
              (distributed-only scheduling solution)
              Job Scheduling Services, IBM Tivoli Workload Scheduler for z/OS connector,
              and Tivoli Workload Scheduler Connector (end-to-end scheduling solution)

The most up-to-date system and software requirements for installing and running
the Job Scheduling Console are described in detail in the IBM Tivoli Workload
Scheduler Job Scheduling Console Release Notes, Feature level 1.3, SC32-1258
(remember to get the April 2004 revision).

It is important to read the release notes before installing the JSC because they
contain system and software requirements as well as the latest installation and
upgrade notes.


3.6.4 Planning for Job Scheduling Console availability
The legacy GUIs gconman and gcomposer are no longer included with Tivoli
Workload Scheduler, so the Job Scheduling Console fills the role of those
programs as the primary interface to Tivoli Workload Scheduler. Staff who work
only with the JSC and are not familiar with the command line interface (CLI)
depend on continuous JSC availability. This requirement must be taken into
consideration when planning for a Tivoli Workload Scheduler backup domain
manager. We therefore recommend that there be a Tivoli Workload Scheduler
connector instance on the Tivoli Workload Scheduler backup domain manager.
This guarantees JSC access without interruption.

           Because the JSC communicates with Tivoli Workload Scheduler for z/OS, Tivoli
           Workload Scheduler domain managers, and Tivoli Workload Scheduler backup
           domain managers through one IBM Tivoli Management Framework (Figure 3-6
on page 154), this framework can be a single point of failure. Consider
establishing a backup Tivoli Management Framework or minimizing the risk of an
outage in the framework by using, for example, clustering techniques.

           You can read more about how to make a Tivoli Management Framework fail-safe
           in the redbook High Availability Scenarios with IBM Tivoli Workload Scheduler
           and IBM Tivoli Framework, SG24-6632. Figure 3-6 on page 154 shows two
           domain managers at the first level directly connected to Tivoli Workload
           Scheduler for z/OS (OPC). In end-to-end scheduling environments it is, as


mentioned earlier, advisable to plan and install connectors and prerequisite
                components (Tivoli Management Framework and Job Scheduling Services) on
                all first-level domain managers.


  [Figure content: The MASTERDM master domain manager runs on z/OS with the OPC
  databases, current plan, and JSC server. DomainA and DomainB each have an AIX
  domain manager running TWS with a Symphony file, together with the TWS
  connector, OPC connector, and Tivoli Management Framework; other DMs and FTAs
  sit below them. The Job Scheduling Console connects through the connectors in
  the framework.]
                Figure 3-6 JSC connections in an end-to-end environment


3.6.5 Planning for server started task for JSC communication
                To use the JSC to communicate with Tivoli Workload Scheduler for z/OS, it is
                necessary for the z/OS system to have a started task that handles IP
                communication with the JSC (more precisely, with the Tivoli Workload Scheduler
                for z/OS (OPC) Connector in the Tivoli Management Framework) (Figure 3-6).

The same server started task can be used for both JSC communication and
end-to-end scheduling. However, we recommend having two server started tasks:
one dedicated to end-to-end scheduling and one dedicated to JSC communication.
With two server started tasks, the JSC server started task can be stopped and
started without any impact on the end-to-end scheduling network.
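As a sketch of this split, the two server started tasks could run with separate parameter
members along the following lines. The SERVOPTS PROTOCOL keyword selects the function of
each server; the subsystem name and member names are examples only, and the JSC server
requires additional JSC-related keywords (described in the Tivoli Workload Scheduler for
z/OS customization documentation) that are omitted here:

   /* Parameters for the end-to-end server started task (example) */
   SERVOPTS SUBSYS(TWSC)
            PROTOCOL(E2E)
            TPLGYPRM(TPLGPARM)

   /* Parameters for the JSC server started task (example)        */
   SERVOPTS SUBSYS(TWSC)
            PROTOCOL(JSC)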




The JSC server started task acts as the communication layer between the Tivoli
         Workload Scheduler for z/OS connector in the Tivoli Management Framework
         and the Tivoli Workload Scheduler for z/OS controller.



3.7 Planning for migration or upgrade from previous
versions
         If you are running end-to-end scheduling with Tivoli Workload Scheduler for z/OS
         Version 8.1 and Tivoli Workload Scheduler Version 8.1, you should plan how to
         do the upgrade or migration from Version 8.1 to Version 8.2. This is also the case
         if you are running an even older version, such as Tivoli OPC Version 2.3.0, Tivoli
         Workload Scheduler 7.0, or Maestro 6.1.

         Tivoli Workload Scheduler 8.2 supports backward compatibility so you can
         upgrade your network gradually, at different times, and in no particular order.

         You can upgrade top-down — that is, upgrade the Tivoli Workload Scheduler for
         z/OS controller (master) first, then the domain managers at the first level, then
         the subordinate domain managers and fault-tolerant agents — or upgrade
         bottom-up by starting with the fault-tolerant agents, then upgrading in sequence,
         leaving the Tivoli Workload Scheduler for z/OS controller (master) for last.

However, if you upgrade the Tivoli Workload Scheduler for z/OS controller first,
some new Version 8.2 functions (such as firewall support and centralized script)
will not work until the whole network is upgraded. During the upgrade procedure, the
         installation backs up all of the configuration information, installs the new product
         code, and automatically migrates old scheduling data and configuration
         information. However, it does not migrate user files or directories placed in the
         Tivoli Workload Scheduler for z/OS server work directory or in the Tivoli
         Workload Scheduler TWShome directory.

         Before doing the actual installation, you should decide on the migration or
         upgrade strategy that will be best in your end-to-end scheduling environment.
         This is also the case if you are upgrading from old Tivoli OPC tracker agents or if
         you decide to merge a stand-alone Tivoli Workload Scheduler environment with
         your Tivoli Workload Scheduler for z/OS environment to have a new end-to-end
         scheduling environment.

Our experience is that installation and upgrading of an existing end-to-end
scheduling environment takes some time, and the time required depends on the
size of the environment. It is good to be prepared from the first day and to
make good, realistic implementation plans and schedules.




Another important thing to remember is that Tivoli Workload Scheduler
                end-to-end scheduling has been improved and has changed considerably from
                Version 8.1 to Version 8.2. If you are running Tivoli Workload Scheduler 8.1
                end-to-end scheduling and are planning to upgrade to Version 8.2 end-to-end
                scheduling, we recommend that you:
                1. First do a “one-to-one” upgrade from Tivoli Workload Scheduler 8.1
                   end-to-end scheduling to Tivoli Workload Scheduler 8.2 end-to-end
                   scheduling.
                2. When the upgrade is completed and you are running Tivoli Workload
                   Scheduler 8.2 end-to-end scheduling in the whole network, then start to
                   implement the new functions and facilities that were introduced in Tivoli
                   Workload Scheduler for z/OS 8.2 and Tivoli Workload Scheduler 8.2.



3.8 Planning for maintenance or upgrades
The Tivoli maintenance strategy for Tivoli Workload Scheduler introduces a new
way to maintain the product more effectively and easily. On a quarterly basis,
Tivoli provides updates with recent patches and offers a fix pack that is similar to
a maintenance release. This fix pack can be downloaded from the support FTP site
(ftp://ftp.software.ibm.com/software/tivoli_support/patches) or shipped on
a CD. Ask your local Tivoli support for more details.

                In this book, we have recommended upgrading your end-to-end scheduling
                environment to FixPack 04 level. This level will change with time, of course, so
                when you start the installation you should plan to download and install the latest
                fix pack level.






    Chapter 4.    Installing IBM Tivoli
                  Workload Scheduler 8.2
                  end-to-end scheduling
                  When planning as described in the previous chapter is completed, it is time to
                  install the software (Tivoli Workload Scheduler for z/OS V8.2 and Tivoli Workload
                  Scheduler V8.2 and, optionally, Tivoli Workload Scheduler Job Scheduling
                  Console V1.3) and configure the installed software for end-to-end scheduling.

                  In this chapter, we provide details on how to install and configure Tivoli Workload
                  Scheduler end-to-end scheduling and Job Scheduling Console (JSC), including
                  how to perform the installation and the necessary steps involved.

                  We describe installation of:
                      IBM Tivoli Workload Scheduler for z/OS V8.2
                      IBM Tivoli Workload Scheduler V8.2
                      IBM Tivoli Workload Scheduler Job Scheduling Console V1.3
                      We also describe installation of the components that are required to run the
                      JSC.




4.1 Before the installation is started
                Before you start the installation, it is important to understand that Tivoli Workload
                Scheduler end-to-end scheduling involves two components:
                    IBM Tivoli Workload Scheduler for z/OS
                    IBM Tivoli Workload Scheduler

The Tivoli Workload Scheduler Job Scheduling Console is not a required
product, but our experience from working with the Tivoli Workload Scheduler
end-to-end scheduling environment is that the JSC is a helpful tool for
troubleshooting and for new users who are not yet familiar with job scheduling,
Tivoli Workload Scheduler, or Tivoli Workload Scheduler for z/OS.

                The overall installation and customization process is not complicated and can be
                narrowed down to the following steps:
                1. Design the topology (for example, domain hierarchy or number of domains)
                   for the distributed Tivoli Workload Scheduler network in which Tivoli Workload
                   Scheduler for z/OS will do the workload scheduling. Use the guidelines in
                   3.5.4, “Network planning and considerations” on page 141 when designing
                   the topology.
                2. Install and verify the Tivoli Workload Scheduler for z/OS controller and
                   end-to-end server tasks in the host environment.
                    Installation and verification of Tivoli Workload Scheduler for z/OS end-to-end
                    scheduling is described in 4.2, “Installing Tivoli Workload Scheduler for z/OS
                    end-to-end scheduling” on page 159.

                      Note: If you run on a previous release of IBM Tivoli Workload Scheduler for
                      z/OS (OPC), you should also migrate from this release to Tivoli Workload
                      Scheduler for z/OS 8.2 as part of the installation. Migration steps are
                      described in the Tivoli Workload Scheduler for z/OS Installation Guide,
                      SH19-4543. Migration is performed with a standard program supplied with
                      Tivoli Workload Scheduler for z/OS.

                3. Install and verify the Tivoli Workload Scheduler distributed workstations
                   (fault-tolerant agents).
                    Installation and verification of the Tivoli Workload Scheduler distributed
                    workstations is described in 4.3, “Installing Tivoli Workload Scheduler in an
                    end-to-end environment” on page 207.




Important: These workstations can be installed and configured before the
             Tivoli Workload Scheduler for z/OS components, but it will not be possible
             to test the connections before the mainframe components are installed and
             ready.

         4. Define and activate fault-tolerant workstations (FTWs) in the Tivoli Workload
            Scheduler for z/OS controller:
            – Define FTWs in the Tivoli Workload Scheduler for z/OS database.
            – Activate the FTW definitions by running the plan extend or replan batch
              job.
            – Verify that the workstations are active and linked.
            This is described in 4.4, “Define, activate, verify fault-tolerant workstations” on
            page 211.
         5. Create fault-tolerant workstation jobs and job streams for the jobs to be
            executed on the FTWs, using either centralized script, non-centralized script,
            or a combination.
            This is described in 4.5, “Creating fault-tolerant workstation job definitions and
            job streams” on page 217.
         6. Do a verification test of the Tivoli Workload Scheduler for z/OS end-to-end
            scheduling. The verification test is used to verify that the Tivoli Workload
            Scheduler for z/OS controller can schedule and track jobs on the FTWs.
            The verification test should also confirm that it is possible to browse the job
            log for completed jobs run on the FTWs.
            This is described in 4.6, “Verification test of end-to-end scheduling” on
            page 235.

         If you would like to use the Job Scheduling Console to work with Tivoli Workload
         Scheduler for z/OS, Tivoli Workload Scheduler, or both, you should also activate
         support for the JSC. The necessary installation steps for activating support for
         the JSC are described in 4.7, “Activate support for the Tivoli Workload Scheduler
         Job Scheduling Console” on page 245.



4.2 Installing Tivoli Workload Scheduler for z/OS
end-to-end scheduling
In this section, we guide you through the installation process of Tivoli Workload
Scheduler for z/OS, especially the end-to-end feature. We do not duplicate the



entire installation of the base product, which is described in the IBM Tivoli
                Workload Scheduler for z/OS Installation, SC32-1264.

                To activate support for end-to-end scheduling in Tivoli Workload Scheduler for
                z/OS to be able to schedule jobs on the Tivoli Workload Scheduler FTAs, follow
                these steps:
                1. Run EQQJOBS and specify Y for the end-to-end feature.
                    See 4.2.1, “Executing EQQJOBS installation aid” on page 162.
                2. Define controller (engine) and tracker (agent) subsystems in SYS1.PARMLIB.
                    See 4.2.2, “Defining Tivoli Workload Scheduler for z/OS subsystems” on
                    page 167.
                3. Allocate the end-to-end data sets running the EQQPCS06 sample generated
                   by EQQJOBS.
                    See 4.2.3, “Allocate end-to-end data sets” on page 168.
                4. Create and customize the work directory by running the EQQPCS05 sample
                   generated by EQQJOBS.
                    See 4.2.4, “Create and customize the work directory” on page 170.
                5. Create started task procedures for Tivoli Workload Scheduler for z/OS
                    See 4.2.5, “Create started task procedures for Tivoli Workload Scheduler for
                    z/OS” on page 173.
                6. Define workstation (CPU) configuration and domain organization by using the
                   CPUREC and DOMREC statements in a new PARMLIB member. (The default
                   member name is TPLGINFO.)
                    See 4.2.6, “Initialization statements for Tivoli Workload Scheduler for z/OS
                    end-to-end scheduling” on page 174, “DOMREC statement” on page 185,
                    “CPUREC statement” on page 187, and Figure 4-6 on page 176.
                7. Define Windows user IDs and passwords by using the USRREC statement in
                   a new PARMLIB member. (The default member name is USRINFO.)
                    It is important to remember that you have to define Windows user IDs and
                    passwords only if you have fault-tolerant agents on Windows-supported
                    platforms and want to schedule jobs to be run on these Windows platforms.
                    See “USRREC statement” on page 195.




160   End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
8. Define the end-to-end configuration by using the TOPOLOGY statement in a
   new PARMLIB member. (The default member name is TPLGPARM.) The
   TOPOLOGY statement is described in “TOPOLOGY statement” on page 178.
   A consolidated sketch of the statements in steps 6 through 11 follows this list.
   In the TOPOLOGY statement, you should specify the following:
   – For the TPLGYMEM keyword, write the name of the member used in step
     6. (See Figure 4-6 on page 176.)
   – For the USRMEM keyword, write the name of the member used in step 7
     on page 160. (See Figure 4-6 on page 176.)
9. Add the TPLGYSRV keyword to the OPCOPTS statement in the Tivoli
   Workload Scheduler for z/OS controller to specify the server name that will be
   used for end-to-end communication.
   See “OPCOPTS TPLGYSRV(server_name)” on page 176.
10.Add the TPLGYPRM keyword to the SERVOPTS statement in the Tivoli
   Workload Scheduler for z/OS end-to-end server to specify the member name
   used in step 8 on page 161. This step activates end-to-end communication in
   the end-to-end server started task.
   See “SERVOPTS TPLGYPRM(member name/TPLGPARM)” on page 177.
11.Add the TPLGYPRM keyword to the BATCHOPT statement to specify the
   member name used in step 8 on page 161. This step activates the end-to-end
   feature in the plan extend, plan replan, and Symphony renew batch jobs.
   See “TPLGYPRM(member name/TPLGPARM) in BATCHOPT” on page 177.
12.Optionally, you can customize the way the job name is generated in the
   Symphony file by the Tivoli Workload Scheduler for z/OS plan extend, replan,
   and Symphony renew batch jobs.
   The job name in the Symphony file can be tailored or customized by the
   JTOPTS TWSJOBNAME() parameter. See 4.2.9, “The JTOPTS
   TWSJOBNAME() parameter” on page 200 for more information.
   If you decide to customize the job name layout in the Symphony file, be aware
   that it can require that you reallocate the EQQTWSOU data set with larger
   record length. See “End-to-end input and output data sets” on page 168 for
   more information.

    Note: The JTOPTS TWSJOBNAME() parameter was introduced by APAR
    PQ77970.




13.Verify that the Tivoli Workload Scheduler for z/OS controller and server
                   started tasks can be started (or restarted if already running) and verify that
                   everything comes up correctly.
                    Verification is described in 4.2.10, “Verify end-to-end installation in Tivoli
                    Workload Scheduler for z/OS” on page 203.
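To tie steps 6 through 11 together, the following sketch shows how the statements might
reference each other. The member names, server name, and directory paths are the defaults
and example values used in this chapter, not required values; see the referenced sections
for the full syntax of each statement.

   /* TPLGPARM member (step 8): end-to-end topology options            */
   TOPOLOGY BINDIR('/usr/lpp/TWS/V8R2M0')   /* installation directory  */
            WRKDIR('/var/inst/TWS')         /* work directory          */
            TPLGYMEM(TPLGINFO)              /* DOMREC/CPUREC, step 6   */
            USRMEM(USRINFO)                 /* USRREC, step 7          */

   /* Controller OPCOPTS (step 9): name the end-to-end server          */
   OPCOPTS  TPLGYSRV(server_name)

   /* End-to-end server SERVOPTS (step 10) and BATCHOPT (step 11)      */
   SERVOPTS TPLGYPRM(TPLGPARM)
   BATCHOPT TPLGYPRM(TPLGPARM)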


4.2.1 Executing EQQJOBS installation aid
                EQQJOBS is a CLIST-driven ISPF dialog that can help you install Tivoli Workload
                Scheduler for z/OS. EQQJOBS assists in the installation of the engine and agent
                by building batch-job JCL that is tailored to your requirements. To make
                EQQJOBS executable, allocate these libraries to the DD statements in your TSO
                session:
                    SEQQCLIB to SYSPROC
                    SEQQPNL0 to ISPPLIB
                    SEQQSKL0 and SEQQSAMP to ISPSLIB
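If you prefer not to modify your TSO logon procedure, one way to make these libraries
available for the current ISPF session is a short REXX exec such as the following sketch;
the data set names under the TWS.V8R2M0 high-level qualifier are examples:

   /* REXX - make the EQQJOBS libraries available for this session */
   ADDRESS TSO "ALTLIB ACTIVATE APPLICATION(CLIST)",
               "DATASET('TWS.V8R2M0.SEQQCLIB')"
   ADDRESS ISPEXEC "LIBDEF ISPPLIB DATASET ID('TWS.V8R2M0.SEQQPNL0')"
   ADDRESS ISPEXEC "LIBDEF ISPSLIB DATASET",
                   "ID('TWS.V8R2M0.SEQQSKL0','TWS.V8R2M0.SEQQSAMP')"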

                Use EQQJOBS installation aid as follows:
                1. To invoke EQQJOBS, enter the TSO command EQQJOBS from an ISPF
                   environment. The primary panel shown in Figure 4-1 appears.


                  EQQJOBS0 ------------ EQQJOBS application menu --------------
                  Select option ===>


                     1   - Create sample job JCL


                     2   - Generate OPC batch-job skeletons


                     3   - Generate OPC Data Store samples


                     X   - Exit from the EQQJOBS dialog

                Figure 4-1 EQQJOBS primary panel

                    You only need to select options 1 and 2 for end-to-end specifications. We do
                    not want to step through the whole EQQJOBS dialog so, instead, we show
                    only the related end-to-end panels. (The referenced panel names are
                    indicated in the top-left corner of the panels, as shown in Figure 4-1.)
                2. Select option 1 in panel EQQJOBS0 (and press Enter twice), and make your
                   necessary input into panel ID EQQJOBS8. (See Figure 4-2 on page 163.)


EQQJOBS8---------------------------- Create sample job JCL --------------------
 Command ===>

   END TO END FEATURE:                  Y        (Y= Yes ,N= No)
    Installation Directory       ===>   /usr/lpp/TWS/V8R2M0_____________________
                                 ===>   ________________________________________
                                 ===>   ________________________________________
    Work Directory               ===>   /var/inst/TWS___________________________
                                 ===>   ________________________________________
                                 ===>   ________________________________________
    User for OPC address space   ===>   UID ___
    Refresh CP group             ===>   GID   __

   RESTART AND CLEANUP (DATA STORE)     N          (Y= Yes ,N= No)
    Reserved destination        ===>    OPC_____
    Connection type             ===>    SNA        (SNA/XCF)
    SNA Data Store luname       ===>    ________   (only for   SNA   connection   )
    SNA FN task luname          ===>    ________   (only for   SNA   connection   )
    Xcf Group                   ===>    ________   (only for   XCF   connection   )
    Xcf Data store member       ===>    ________   (only for   XCF   connection   )
    Xcf FL task member          ===>    ________   (only for   XCF   connection   )

 Press ENTER to create sample job JCL

Figure 4-2 Server-related input panel

   The following definitions are important:
   – END-TO-END FEATURE
       Specify Y if you want to install end-to-end scheduling and run jobs on Tivoli
       Workload Scheduler fault-tolerant agents.
   – Installation Directory
Specify the (HFS) path where SMP/E has installed the Tivoli Workload
        Scheduler for z/OS files for UNIX System Services for the end-to-end
        enabler feature. This directory is the one containing the bin
        directory. The default path is /usr/lpp/TWS/V8R2M0.
       The installation directory is created by SMP/E job EQQISMKD and
       populated by applying the end-to-end feature (JWSZ103).
       This should be mounted Read-Only on every system in your sysplex.
   – Work Directory
Specify where the subsystem-specific files are, using a path that
        uniquely identifies your subsystem. Each subsystem that will use the
       fault-tolerant workstations must have its own work directory. Only the
       server and the daily planning batch jobs update the work directory.




This directory is where the end-to-end processes have their working files
                        (Symphony, event files, traces). It should be mounted Read/Write on every
                        system in your sysplex.

                  Important: To configure end-to-end scheduling in a sysplex environment
                  successfully, make sure that the work directory is available to all systems in
                  the sysplex. This way, in case of a takeover situation, the new server will be
                  started on a new system in the sysplex, and the server must be able to access
                  the work directory to continue processing.

                  As described in Section 3.4.4, “Hierarchical File System (HFS) cluster” on
                  page 124, we recommend having dedicated HFS clusters for each end-to-end
                  scheduling environment (end-to-end server started task), that is:
                      One HFS cluster for the installation binaries per environment (test,
                      production, and so forth)
                      One HFS cluster for the work files per environment (test, production and so
                      forth)

                  The work HFS clusters should be mounted in Read/Write mode and the HFS
                  cluster with binaries should be mounted Read-Only. This is because the
                  working directory is application-specific and contains application-related data.
                  Besides, it makes your backup easier. The size of the cluster depends on the
                  size of the Symphony file and how long you want to keep the stdlist files. We
                  recommend that you allocate 2 GB of space.
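  As an illustration of this setup, BPXPRMxx MOUNT statements for the two HFS
  data sets might look like the following; the data set names are examples, and
  the mount points match the installation and work directories used in this
  chapter:

     MOUNT FILESYSTEM('OMVS.TWS.V8R2M0.BIN.HFS')
           MOUNTPOINT('/usr/lpp/TWS/V8R2M0')
           TYPE(HFS) MODE(READ)
     MOUNT FILESYSTEM('OMVS.TWS.V8R2M0.WORK.HFS')
           MOUNTPOINT('/var/inst/TWS')
           TYPE(HFS) MODE(RDWR)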

                    – User for OPC address space
                        This information is used to create the EQQPCS05 sample job used to
                        build the directory with the right ownership. In order to run the end-to-end
                        feature correctly, the ownership of the work directory and the files
                        contained in it must be assigned to the same user ID that RACF
                        associates with the server started task. In the User for OPC address
                        space field, specify the RACF user ID used for the Server address space.
                        This is the name specified in the started-procedure table.
                    – Refresh CP group
                        This information is used to create the EQQPCS05 sample job used to
                        build the directory with the right ownership. In order to create the new
                        Symphony file, the user ID that is used to run the daily planning batch job
                        must belong to the group that you specify in this field. Make sure that the
                        user ID that is associated with the Server and Controller address spaces
                        (the one specified in the User for OPC address space field) belongs to this
                        group or has this group as a supplementary group.




As you can see in Figure 4-3 on page 165, we defined RACF user ID
                              TWSCE2E to the end-to-end server started task. User TWSCE2E belongs
                              to RACF group TWSGRP. Therefore, all users of the RACF group
                              TWSGRP and its supplementary group get access to create the
                              Symphony file and to modify and read other files in the work directory.

                                Tip: The Refresh CP group field can be used to give access to the HFS
                                file as well as to protect the HFS directory from unauthorized access.




   EQQJOBS8 ------------------- Create sample job JCL ------------
   Command ===>

     end-to-end FEATURE:                  Y        (Y= Yes , N= No)
      HFS Installation Directory ===>     /usr/lpp/TWS/V8R2M0______________
                                 ===>     ___________________________
                                 ===>     ___________________________
      HFS Work Directory         ===>     /var/inst/TWS_____________
                                 ===>     ___________________________
                                 ===>     ___________________________
      User for OPC Address Space ===>     E2ESERV_
      Refresh CP Group           ===>     TWSGRP__
   ...

   The callouts in the figure explain the fields as follows:

   HFS Binary Directory: Where the TWS binaries that run in USS were installed
   (for example, translator, mailman, and batchman). This should be the same as
   the value of the TOPOLOGY BINDIR parameter.

   HFS Working Directory: Where the TWS files that change throughout the day
   reside (for example, Symphony, mailbox files, and logs for the TWS processes
   that run in USS). This should be the same as the value of the TOPOLOGY
   WRKDIR parameter.

   User for End-to-end Server Task: The user associated with the end-to-end
   server started task.

   Group for Batch Planning Jobs: The group containing all users who will run
   batch planning jobs (CP extend, replan, refresh, and Symphony renew).

   The EQQPCS05 sample JCL generated from these values:

   //TWS      JOB ,'TWS INSTALL',CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1)
   /*JOBPARM SYSAFF=SC64
   //JOBLIB   DD DSN=TWS.V8R2M0.SEQQLMD0,DISP=SHR
   //ALLOHFS EXEC PGM=BPXBATCH,REGION=4M
   //STDOUT   DD PATH='/tmp/eqqpcs05out',
   //            PATHOPTS=(OCREAT,OTRUNC,OWRONLY),PATHMODE=SIRWXU
   //STDIN    DD PATH='/usr/lpp/TWS/V8R2M0/bin/config',
   //            PATHOPTS=(ORDONLY)
   //STDENV   DD *
   eqqBINDIR=/usr/lpp/TWS/V8R2M0
   eqqWRKDIR=/var/inst/TWS
   eqqUID=E2ESERV
   eqqGID=TWSGRP
   /*
   //*
   //OUTPUT1 EXEC PGM=IKJEFT01
   //STDOUT   DD SYSOUT=*,DCB=(RECFM=V,LRECL=256)
   //OUTPUT   DD PATH='/tmp/eqqpcs05out',
   //            PATHOPTS=ORDONLY
   //SYSTSPRT DD DUMMY
   //SYSTSIN  DD *
     OCOPY INDD(OUTPUT) OUTDD(STDOUT)
     BPXBATCH SH rm /tmp/eqqpcs05out
   /*
Figure 4-3 Description of the input fields in the EQQJOBS8 panel

                     3. Press Enter to generate the installation job control language (JCL) jobs.
                        Table 4-1 lists the subset of the sample JCL members created by EQQJOBS
                        that relate to end-to-end scheduling.




Table 4-1 Sample JCL members related to end-to-end scheduling (created by
                EQQJOBS)
                  Member                  Description

                  EQQCON                  Sample started task procedure for a Tivoli Workload Scheduler
                                          for z/OS controller and tracker in same address space.

                  EQQCONO                 Sample started task procedure for the Tivoli Workload
                                          Scheduler for z/OS controller only.

                  EQQCONP                 Sample initial parameters for a Tivoli Workload Scheduler for
                                          z/OS controller and tracker in same address space.

                  EQQCONOP                Sample initial parameters for a Tivoli Workload Scheduler for
                                          z/OS controller only.

                  EQQPCS05                Creates the working directory in HFS used by the end-to-end
                                          server task.

EQQPCS06                Allocates the data sets necessary to run end-to-end
                                           scheduling.

                  EQQSER                  Sample started task procedure for a server task.

                  EQQSERV                 Sample initialization parameters for a server task.

                4. EQQJOBS is also used to create batch-job skeletons. That is, skeletons for
                   the batch jobs (such as plan extend, replan, Symphony renew) that you can
                   submit from Tivoli Workload Scheduler for z/OS legacy ISPF panels. To
                   create batch-job skeletons, select option 2 in the EQQJOBS primary panel
                   (see Figure 4-1 on page 162). Make your necessary entries until panel
                   EQQJOBSA appears (Figure 4-4).




EQQJOBSA -------------- Generate OPC batch-job skeletons ----------------------
            Command ===>

             Specify if you want to use the following optional features:

              END TO END FEATURE:                      Y     (Y= Yes ,N= No)
             (To interoperate with TWS
              fault tolerant workstations)

              RESTART AND CLEAN UP (DATA STORE):       N     (Y= Yes ,N= No)
             (To be able to retrieve job log,
              execute dataset clean up actions
              and step restart)

              FORMATTED REPORT OF TRACKLOG EVENTS:   Y    (Y= Yes ,N= No)
                EQQTROUT dsname       ===> TWS.V8R20.*.TRACKLOG____________________________
                EQQAUDIT output dsn   ===> TWS.V8R20.*.EQQAUDIT.REPORT_____________________

            Press ENTER to generate OPC batch-job skeletons

           Figure 4-4 Generate end-to-end skeletons

           5. Specify Y for the END-TO-END FEATURE if you want to use end-to-end
              scheduling to run jobs on Tivoli Workload Scheduler fault-tolerant
              workstations.
6. Press Enter, and the skeleton members for the daily plan extend, replan, and
   trial plan jobs and for the long-term plan extend, replan, and trial plan jobs
   are created with data sets related to end-to-end scheduling. A new member is
   also created (see Table 4-2 on page 167).
           Table 4-2 End-to-end skeletons
            Member                  Description

            EQQSYRES                Tivoli Workload Scheduler Symphony renew


4.2.2 Defining Tivoli Workload Scheduler for z/OS subsystems
           The subsystem for the Tivoli Workload Scheduler for z/OS controllers (engines)
           and trackers on the z/OS images (agents) must be defined in the active
           subsystem-name-table member of SYS1.PARMLIB. It is advisable to install at
           least two Tivoli Workload Scheduler for z/OS controlling systems, one for testing
           and one for your production environment.

            Note: We recommend that you install the trackers (agents) and the Tivoli
            Workload Scheduler for z/OS controller (engine) in separate address spaces.




To define the subsystems, update the active IEFSSNnn member in
                SYS1.PARMLIB. The name of the subsystem initialization module for Tivoli
                Workload Scheduler for z/OS is EQQINITF. Include records, as in the following
                example.
                Example 4-1 Subsystem definition record (IEFSSNnn member of SYS1.PARMLIB)
                SUBSYS SUBNAME(subsystem name)              /* TWS for z/OS subsystem */
                       INITRTN(EQQINITF)
                       INITPARM('maxecsa,F')


                Note that the subsystem name must be two to four characters: for example,
                TWSC for the controller subsystem and TWST for the tracker subsystems. Check
                the IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264, for more
                information.
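
                As an illustration, a minimal sketch of the IEFSSNnn entries for a controller
                subsystem named TWSC and a tracker subsystem named TWST might look as
                follows. The maxecsa value of 4000 is only a placeholder; calculate the correct
                value for your installation as described in the installation guide.

                SUBSYS SUBNAME(TWSC)              /* TWS for z/OS controller */
                       INITRTN(EQQINITF)
                       INITPARM('4000,F')
                SUBSYS SUBNAME(TWST)              /* TWS for z/OS tracker    */
                       INITRTN(EQQINITF)
                       INITPARM('4000,F')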


4.2.3 Allocate end-to-end data sets
                Member EQQPCS06, created by EQQJOBS in your sample job JCL library,
                allocates the following VSAM and sequential data sets needed for end-to-end
                scheduling:
                    End-to-end script library (EQQSCLIB) for non-centralized script
                    End-to-end input and output events data sets (EQQTWSIN and
                    EQQTWSOU)
                    Current plan backup copy data set to create Symphony (EQQSCPDS)
                    End-to-end centralized script data library (EQQTWSCS)

                We explain the use and allocation of these data sets in more detail.

                End-to-end script library (EQQSCLIB)
                This script library data set includes members containing the commands or the
                job definitions for fault-tolerant workstations. It is required in the controller if you
                want to use the end-to-end scheduling feature. See Section 4.5.3, “Definition of
                non-centralized scripts” on page 221 for details about the JOBREC, RECOVERY,
                and VARSUB statements.

                  Tip: Do not compress members in this PDS. For example, do not use the ISPF
                  PACK ON command, because Tivoli Workload Scheduler for z/OS does not
                  use ISPF services to read it.
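
                As an early illustration (the full syntax is covered in Section 4.5.3, “Definition of
                non-centralized scripts” on page 221), a member of this library for a UNIX
                fault-tolerant workstation could contain a JOBREC similar to the following sketch.
                The JOBSCR keyword identifies the script to run on the fault-tolerant workstation
                and JOBUSR the user ID under which it runs; the path and user shown here are
                examples only.

                JOBREC JOBSCR('/opt/tws/scripts/payroll.sh')
                       JOBUSR(twsuser)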


                End-to-end input and output data sets
                These data sets are required by every Tivoli Workload Scheduler for z/OS
                address space that uses the end-to-end feature. They record the descriptions of



events related to operations that run on FTWs and are used by both the
end-to-end enabler task and the translator process in the scheduler’s server.

The data sets are device-dependent and can have only primary space allocation.
Do not allocate any secondary space. They are automatically formatted by Tivoli
Workload Scheduler for z/OS the first time they are used.

 Note: An SD37 abend code is produced when Tivoli Workload Scheduler for
 z/OS formats a newly allocated data set. Ignore this error.

EQQTWSIN and EQQTWSOU are wrap-around data sets. In each data set, the
header record is used to track the number of records that have been read and
written. To avoid the loss of event records, a writer task does not write any new
records until more space is available, that is, until the existing records have
been read.

The quantity of space that you need to define for each data set requires some
attention. Because the two data sets are also used for job log retrieval, the limit
for the job log length is half the maximum number of records that can be stored in
the input events data set. Two cylinders are sufficient for most installations.

The maximum length of the events logged in these two data sets, including the
job logs, is 120 bytes. However, it is possible to allocate the data sets with a
longer logical record length; using record lengths greater than 120 bytes
produces neither advantages nor problems. The maximum allowed value is 32000
bytes; greater values cause the end-to-end server started task to terminate.
Each data set must have enough space for at least 1000 events (the maximum
number of job log events is 500). Use this as a reference if you plan to define a
record length greater than 120 bytes. When a record length of 120 bytes is used,
the space allocation must be at least 1 cylinder. The data sets must be
unblocked, and the block size must be the same as the logical record length.

A minimum record length of 160 bytes is necessary for the EQQTWSOU data set
in order to be able to decide how to build the job name in the Symphony file.
(Refer to the TWSJOBNAME parameter in the JTOPTS statement in
Section 4.2.9, “The JTOPTS TWSJOBNAME() parameter” on page 200.)
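
As a minimal allocation sketch consistent with these rules (the data set names
and unit are examples only; the EQQPCS06 sample is the supported way to allocate
these data sets):

   //ALLOCEVT EXEC PGM=IEFBR14
   //* Unblocked, BLKSIZE equal to LRECL, primary space only (no secondary)
   //EQQTWSIN DD DSN=TWS.V8R20.TWSIN,DISP=(NEW,CATLG),
   //            UNIT=3390,SPACE=(CYL,(2)),
   //            DCB=(RECFM=F,LRECL=160,BLKSIZE=160)
   //EQQTWSOU DD DSN=TWS.V8R20.TWSOU,DISP=(NEW,CATLG),
   //            UNIT=3390,SPACE=(CYL,(2)),
   //            DCB=(RECFM=F,LRECL=160,BLKSIZE=160)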

For good performance, define the data sets on a device with plenty of availability.
If you run programs that use the RESERVE macro, try to allocate the data sets
on a device that is not, or only slightly, reserved.

Initially, you may need to test your system to get an idea of the number and types
of events that are created at your installation. After you have gathered enough
information, you can reallocate the data sets. Before you reallocate a data set,
ensure that the current plan is entirely up-to-date. You must also stop the


end-to-end sender and receiver task on the controller and the translator thread
                on the server that use this data set.

                  Tip: Do not move these data sets after they have been allocated. They contain
                  device-dependent information and cannot be copied from one type of device
                  to another, or moved around on the same volume. An end-to-end event data
                  set that is moved will be re-initialized. This causes all events in the data set to
                  be lost. If you have DFHSM or a similar product installed, you should specify
                  that end-to-end event data sets are not migrated or moved.


                Current plan backup copy data set (EQQSCPDS)
                EQQSCPDS is the current plan backup copy data set that is used to create the
                Symphony file.

                During the creation of the current plan, the SCP data set is used as a CP backup
                copy for the production of the Symphony file. This VSAM data set is used when
                the end-to-end feature is active. It should be allocated with the same size as the
                CP1/CP2 and NCP VSAM data sets.

                End-to-end centralized script data set (EQQTWSCS)
                Tivoli Workload Scheduler for z/OS uses the end-to-end centralized script data
                set to temporarily store a script when it is downloaded from the JOBLIB data set
                to the agent for its submission.

                Set the following attributes for EQQTWSCS:
                    DSNTYPE=LIBRARY,
                    SPACE=(CYL,(1,1,10)),
                    DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)

                If you want to use centralized script support when scheduling end-to-end, use the
                EQQTWSCS DD statement in the controller and server started tasks. The data
                set must be a partitioned data set extended (PDSE).
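
                For reference, a minimal allocation sketch that matches the attributes listed
                above (the data set name is an example only; the EQQPCS06 sample allocates this
                data set for you):

                //ALLOCCS  EXEC PGM=IEFBR14
                //* PDSE (DSNTYPE=LIBRARY) for temporary storage of centralized scripts
                //EQQTWSCS DD DSN=TWS.V8R20.TWSCS,DISP=(NEW,CATLG),
                //            DSNTYPE=LIBRARY,
                //            SPACE=(CYL,(1,1,10)),
                //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)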


4.2.4 Create and customize the work directory
                To install the end-to-end feature, you must allocate the files that the feature uses.
                Then, on every Tivoli Workload Scheduler for z/OS controller that will use this
                feature, run the EQQPCS05 sample to create the directories and files.

                The EQQPCS05 sample must be run by a user with one of the following
                permissions:
                    UNIX System Services (USS) user ID (UID) equal to 0
                    BPX.SUPERUSER FACILITY class profile in RACF


UID specified in the JCL in eqqUID and belonging to the group (GID)
                          specified in the JCL in eqqGID

                        If the GID or the UID was not specified in EQQJOBS, you can specify them in
                        the STDENV DD statement before running EQQPCS05.
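
                        For example, the STDENV DD statement in the EQQPCS05 job could be coded
                        as an in-stream data set similar to the following sketch. The user and group
                        names are examples that match the ownership shown in Figure 4-5; check the
                        comments in the EQQPCS05 sample for the exact variable names and format.

                        //STDENV   DD *
                        eqqUID=E2ESERV
                        eqqGID=TWSGRP
                        /*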

                       The EQQPCS05 job runs a configuration script (named config) residing in the
                       installation directory. This configuration script creates a working directory with
                       the right permissions. It also creates several files and directories in this working
                       directory. (See Figure 4-5 on page 171.)


    z/OS: The EQQPCS05 sample JCL for installation of the end-to-end feature must
    be run as one of the following:
    •  a user associated with USS UID 0; or
    •  a user with the BPX.SUPERUSER facility in RACF; or
    •  the user that will be specified in eqqUID (the user associated with the
       end-to-end server started task)

    USS: EQQPCS05 runs the configuration scripts (config and configure) from
    BINDIR to populate WRKDIR, for example:

    Permissions      Owner     Group     Size   Date     Time    File Name
    -rw-rw----    1  E2ESERV   TWSGRP     755   Feb 3    13:01   NetConf
    -rw-rw----    1  E2ESERV   TWSGRP    1122   Feb 3    13:01   TWSCCLog.properties
    -rw-rw----    1  E2ESERV   TWSGRP    2746   Feb 3    13:01   localopts
    drwxrwx---    2  E2ESERV   TWSGRP    8192   Feb 3    13:01   mozart
    drwxrwx---    2  E2ESERV   TWSGRP    8192   Feb 3    13:01   pobox
    drwxrwxr-x    3  E2ESERV   TWSGRP    8192   Feb 11   09:48   stdlist

    The configure script creates subdirectories; copies configuration files; and sets
    the owner, group, and permissions of these directories and files. This last step
    is the reason EQQPCS05 must be run as a user with sufficient privileges.

Figure 4-5 EQQPCS05 sample JCL and the configure script

                       After running EQQPCS05, you can find the following files in the work directory:
                          localopts
                          Defines the attributes of the local workstation (OPCMASTER) for batchman,
                          mailman, netman, and writer processes and for SSL. Only a subset of these
                          attributes is used by the end-to-end server on z/OS. Refer to IBM Tivoli
                          Workload Scheduler for z/OS Customization and Tuning, SC32-1265, for
                          information about customizing this file.




mozart/globalopts
                    Defines the attributes of the IBM Tivoli Workload Scheduler network
                    (OPCMASTER ignores them).
                    Netconf
                    Netman configuration files
                    TWSCCLog.properties
                    Defines the attributes for the trace function in the end-to-end server USS
                    processes.

                You will also find the following directories in the work directory:
                    mozart
                    pobox
                    stdlist
                    stdlist/logs (contains the log files for USS processes)

                Do not touch or delete any of these files or directories, which are created in the
                work directory by the EQQPCS05 job, unless you are directed to do so, for
                example in error situations.

                  Tip: If you run this job in a sysplex that cannot share the HFS (prior to
                  OS/390 V2R9) and you get messages such as cannot create directory, take a
                  closer look at which system the job actually ran on. Without system affinity,
                  any member that has an initiator started in the right class can execute the
                  job, so add a /*JOBPARM SYSAFF statement to make sure that the job runs
                  on the system where the work HFS is mounted.

                Note that the EQQPCS05 job does not define the physical HFS (or zFS) data set.
                EQQPCS05 initializes an existing HFS data set with the necessary files and
                directories for the end-to-end server started task.

                The physical HFS data set can be created with a job that contains an IEFBR14
                step, as shown in Example 4-2.
                Example 4-2 HFS data set creation
                //USERHFS EXEC PGM=IEFBR14
                //D1      DD DISP=(,CATLG),DSNTYPE=HFS,
                //           SPACE=(CYL,(prispace,secspace,1)),
                //           DSN=OMVS.TWS820.TWSCE2E.HFS


                Allocate the HFS work data set with enough space for your end-to-end server
                started task. In most installations, 2 GB of disk space is enough.
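
                The allocated HFS data set must then be mounted at the work directory before
                EQQPCS05 is run. As a sketch, assuming the data set name from Example 4-2
                and a work directory of /tws/wrkdir (the path used in Figure 4-6 on page 176),
                the TSO MOUNT command would look similar to this:

                MOUNT FILESYSTEM('OMVS.TWS820.TWSCE2E.HFS') +
                      MOUNTPOINT('/tws/wrkdir') TYPE(HFS) MODE(RDWR)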




4.2.5 Create started task procedures for Tivoli Workload Scheduler
for z/OS
           Perform this task for a Tivoli Workload Scheduler for z/OS tracker (agent),
           controller (engine), and server started task. You must define a started task
           procedure or batch job for each Tivoli Workload Scheduler for z/OS address
           space.

           The EQQJOBS dialog generates several members in the output sample library
           that you specified when running the EQQJOBS installation aid program. These
           members contain started task JCL that is tailored with the values you entered in
           the EQQJOBS dialog. Tailor these members further, according to the data sets
           you require. See Figure 4-1 on page 166.

           Because the end-to-end server started task uses TCP/IP communication, modify
           the JCL of EQQSER in the following way:
              Make sure that the end-to-end server started task has access to the
              C runtime libraries, either as STEPLIB (include the CEE.SCEERUN in the
              STEPLIB concatenation) or by LINKLIST (the CEE.SCEERUN is in the
              LINKLIST concatenation).
              If you have multiple TCP/IP stacks, or if the name you used for the procedure
              that started up the TCPIP address space is not the default (TCPIP), change
              the end-to-end server started task procedure to include the SYSTCPD DD
              card to point to a data set containing the TCPIPJOBNAME parameter (a
              sample SYSTCPD DD statement follows this list). The standard method to
              determine the connecting TCP/IP image is:
              – Connect to the TCP/IP specified by TCPIPJOBNAME in the active
                TCPIP.DATA.
              – Locate TCPIP.DATA using the SYSTCPD DD card.
              You can also use the end-to-end server TOPOLOGY TCPIPJOBNAME()
              parameter to specify the TCP/IP started task name that is used by the
              end-to-end server. This parameter can be used if you have multiple TCP/IP
              stacks or if the TCP/IP started task name is different from TCPIP.
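
           As a sketch, the SYSTCPD DD statement added to the end-to-end server
           procedure could look like the following; the TCPIP.DATA data set name is an
           example only and depends on your TCP/IP setup:

           //SYSTCPD  DD DISP=SHR,DSN=TCPIP.TCPPARMS(TCPDATA)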

           You must have a server started task to handle end-to-end scheduling. You can
           also use the same server to communicate with the Job Scheduling Console. In
           fact, the server can also handle APPC communication if configured to do so.

           In Tivoli Workload Scheduler for z/OS 8.2, the type of communication that should
           be handled by the server started task is defined in the new SERVOPTS
           PROTOCOL() parameter.




In the PROTOCOL() parameter, you can specify any combination of:
                    APPC: The server should handle APPC communication.
                    JSC: The server should handle JSC communication.
                    E2E: The server should handle end-to-end communication.

                  Recommendations: The Tivoli Workload Scheduler for z/OS controller and
                  end-to-end server use TCP/IP services. Therefore, it is necessary to define a
                  USS segment for the controller and end-to-end server started task user IDs.
                  No special authorization is necessary; the user IDs only need to be defined
                  to USS.

                  Even though it is possible to have one server started task handle end-to-end
                  scheduling, JSC communication, and even APPC communication as well, we
                  recommend having a server started task dedicated to end-to-end scheduling
                  (SERVOPTS PROTOCOL(E2E)). This has the advantage that you do not have
                  to stop all server processes if the JSC server must be restarted.

                  The server started task is important for handling JSC and end-to-end
                  communication. We recommend setting the end-to-end and JSC server
                  started tasks as non-swappable and giving them at least the same dispatching
                  priority as the Tivoli Workload Scheduler for z/OS controller (engine).

                The Tivoli Workload Scheduler for z/OS controller uses the end-to-end server to
                communicate events to the FTAs. The end-to-end server will start multiple tasks
                and processes using the UNIX System Services.


4.2.6 Initialization statements for Tivoli Workload Scheduler for z/OS
end-to-end scheduling
                Initialization statements for end-to-end scheduling fit into two categories:
                1. Statements used to configure the Tivoli Workload Scheduler for z/OS
                   controller (engine) and end-to-end server:
                    a. OPCOPTS and TPLGYPRM statements for the controller
                    b. SERVOPTS statement for the end-to-end server
                2. Statements used to define the end-to-end topology (the network topology for
                   the distributed Tivoli Workload Scheduler network).
                    The end-to-end topology statements fall into two categories:
                    a. Topology statements used to initialize the end-to-end server environment
                       in USS on the mainframe:
                        •   The TOPOLOGY statement



b. Statements used to describe the distributed Tivoli Workload Scheduler
      network and the responsibilities for the different Tivoli Workload Scheduler
      agents in this network:
       •   The DOMREC, CPUREC, and USRREC statements
   These statements are used by the end-to-end server and the plan extend,
   plan replan, and Symphony renew batch jobs. The batch jobs use the
   information when the Symphony file is created.
   See “Initialization statements used to describe the topology” on page 184.

We go through each initialization statement in detail and give you an example of
how a distributed Tivoli Workload Scheduler network can be reflected in Tivoli
Workload Scheduler for z/OS using the topology statements.
Table 4-3 Initialization members related to end-to-end scheduling
 Initialization member              Description

 TPLGYSRV                           Activates end-to-end in the Tivoli Workload Scheduler
                                    for z/OS controller.

 TPLGYPRM                           Activates end-to-end in the Tivoli Workload Scheduler
                                    for z/OS server and plan batch jobs.

 TOPOLOGY                           Specifies all the statements for end-to-end.

 DOMREC                             Defines domains in a distributed Tivoli Workload
                                    Scheduler network.

 CPUREC                             Defines agents in a Tivoli Workload Scheduler
                                    distributed network.

 USRREC                             Specifies user ID and password for NT users.

You can find more information in Tivoli Workload Scheduler for z/OS
Customization and Tuning, SH19-4544.

Figure 4-6 on page 176 illustrates the relationship between the initialization
statements and members related to end-to-end scheduling.




 OPC Controller (TWSC):
   OPCOPTS TPLGYSRV(TWSCE2E)
           SERVERS(TWSCJSC,TWSCE2E)
           ...

 Daily planning batch jobs (CPE, LTPE, and so on):
   BATCHOPT ...
            TPLGYPRM(TPLGPARM)
            ...

 JSC Server (TWSCJSC):                     End-to-end Server (TWSCE2E):
   SERVOPTS SUBSYS(TWSC)                     SERVOPTS SUBSYS(TWSC)
            PROTOCOL(JSC)                             PROTOCOL(E2E)
            CODEPAGE(500)                             TPLGYPRM(TPLGPARM)
            JSCHOSTNAME(TWSCJSC)                      ...
            PORTNUMBER(42581)
            USERMAP(USERMAP)
            ...

 User map, EQQPARM(USERMAP):               Topology parameters, EQQPARM(TPLGPARM):
   USER 'ROOT@M-REGION'                      TOPOLOGY BINDIR(/tws)
     RACFUSER(TMF)                                    WRKDIR(/tws/wrkdir)
     RACFGROUP(TIVOLI)                                HOSTNAME(TWSC.IBM.COM)
   ...                                                PORTNUMBER(31182)
                                                      TPLGYMEM(TPLGINFO)
                                                      USRMEM(USERINFO)
                                                      TRCDAYS(30)
                                                      LOGLINES(100)

 Topology records, EQQPARM(TPLGINFO):      User records, EQQPARM(USRINFO):
   DOMREC ...                                USRREC ...
   DOMREC ...                                USRREC ...
   CPUREC ...                                USRREC ...
   CPUREC ...                                ...
   ...

 Note: It is possible to run many servers, but only one server can be the
 end-to-end server (also called the topology server). Specify this server using
 the TPLGYSRV controller option. The SERVERS option specifies the servers that
 will be started when the controller starts.

 If you plan to use Job Scheduling Console to work with OPC, it is a good idea
 to run two separate servers: one for JSC connections (JSCSERV), and another for
 the connection with the TWS network (E2ESERV).

Figure 4-6 Relationship between end-to-end initialization statements and members

                    In the following sections, we cover the different initialization statements and
                    members and describe their meaning and usage one by one. Refer to Figure 4-6
                    when reading these sections.

                    OPCOPTS TPLGYSRV(server_name)
                    Specify this keyword to activate the end-to-end feature in the Tivoli Workload
                    Scheduler for z/OS (OPC) controller (engine). If you specify this keyword, the
                    IBM Tivoli Workload Scheduler enabler task is started. The specified
                    server_name is that of the end-to-end server that handles the events to and from
                    the FTAs. Only one server can handle events to and from the FTAs.

                    This keyword is defined in OPCOPTS.




 Tip: If you want to let the Tivoli Workload Scheduler for z/OS controller start and
 stop the end-to-end server, use the SERVERS keyword in the OPCOPTS parmlib
 member (see Figure 4-6 on page 176).
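
For example, using the names shown in Figure 4-6 on page 176, the controller
OPCOPTS statement could include:

   OPCOPTS TPLGYSRV(TWSCE2E)          /* end-to-end (topology) server        */
           SERVERS(TWSCJSC,TWSCE2E)   /* servers started with the controller */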


SERVOPTS TPLGYPRM(member name/TPLGPARM)
The SERVOPTS statement is the first statement read by the end-to-end server
started task. In the SERVOPTS, you specify different initialization options for the
server started task, such as:
   The name of the Tivoli Workload Scheduler for z/OS controller that the server
   should communicate with (serve). The name is specified with the SUBSYS()
   keyword.
   The type of protocol. The PROTOCOL() keyword is used to specify the type of
   communication used by the server.
   In Tivoli Workload Scheduler for z/OS 8.2, you can specify any combination of
   the following values separated by comma: E2E, JSC, APPC.

    Note: With Tivoli Workload Scheduler for z/OS 8.2, the TCPIP value has
    been replaced by the combination of the E2E and JSC values, but the
    TCPIP value is still allowed for backward compatibility.

   The TPLGYPRM() parameter is used to define the member name of the
   member in parmlib with the TOPOLOGY definitions for the distributed Tivoli
   Workload Scheduler network.
   The TPLGYPRM() parameter must be specified if PROTOCOL(E2E) is
   specified.

See Figure 4-6 on page 176 for an example of the required SERVOPTS
parameters for an end-to-end server (TWSCE2E in Figure 4-6 on page 176).
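
Based on the values shown in Figure 4-6, a minimal SERVOPTS for a dedicated
end-to-end server could look like this sketch:

   SERVOPTS SUBSYS(TWSC)              /* controller subsystem to serve       */
            PROTOCOL(E2E)             /* handle end-to-end communication     */
            TPLGYPRM(TPLGPARM)        /* member with the TOPOLOGY statement  */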

TPLGYPRM(member name/TPLGPARM) in BATCHOPT
It is important to remember to add the TPLGYPRM() parameter to the
BATCHOPT initialization statement that is used by the Tivoli Workload Scheduler
for z/OS planning jobs (trial plan extend, plan extend, plan replan) and
Symphony renew.

If the TPLGYPRM() parameter is not specified in the BATCHOPT initialization
statement that is used by the plan jobs, no Symphony file is created and no jobs
run in the distributed Tivoli Workload Scheduler network.

See Figure 4-6 on page 176 for an example of how to specify the TPLGYPRM()
parameter in the BATCHOPT initialization statement.
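
For example (only the topology-related keyword is shown here; your BATCHOPT
statement contains many other keywords):

   BATCHOPT TPLGYPRM(TPLGPARM)        /* topology member used by plan jobs   */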


                  Note: The topology definitions in TPLGYPRM() in the BATCHOPT
                  initialization statement are read and verified by the trial plan extend job in
                  Tivoli Workload Scheduler for z/OS. This means that the trial plan extend job
                  can be used to verify the TOPOLOGY definitions, such as DOMREC, CPUREC,
                  and USRREC, for syntax errors or logical errors before the plan extend or plan
                  replan job is executed.

                  Also note that the trial plan extend job does not create a new Symphony file
                  because it does not update the current plan in Tivoli Workload Scheduler for
                  z/OS.

                TOPOLOGY statement
                This statement includes all of the parameters that are related to the end-to-end
                feature. TOPOLOGY is defined in the member of the EQQPARM library as
                specified by the TPLGYPRM parameter in the BATCHOPT and SERVOPTS
                statements. Figure 4-7 on page 179 shows the syntax of the topology member.
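
                As an illustration, the topology member (TPLGPARM in Figure 4-6 on page 176)
                could contain a TOPOLOGY statement similar to the following sketch; the
                directories, host name, and port number are examples only:

                TOPOLOGY BINDIR(/tws)              /* installation directory in USS      */
                         WRKDIR(/tws/wrkdir)       /* work directory created by EQQPCS05 */
                         HOSTNAME(TWSC.IBM.COM)    /* host name used by the server       */
                         PORTNUMBER(31182)         /* TCP/IP port for the FTA network    */
                         TPLGYMEM(TPLGINFO)        /* member with DOMREC and CPUREC      */
                         USRMEM(USERINFO)          /* member with USRREC definitions     */
                         TRCDAYS(30)               /* days to keep stdlist trace files   */
                         LOGLINES(100)             /* max job log lines retrieved        */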




Figure 4-7 The statements that can be specified in the topology member


Description of the topology statements
The topology parameters are described in the following sections.

BINDIR(directory name)
Specifies the name of the base file system (HFS or zFS) directory where
binaries, catalogs, and other files are installed and shared among
subsystems.

The specified directory must be the same as the directory where the binaries are,
without the final bin. For example, if the binaries are installed in
/usr/lpp/TWS/V8R2M0/bin and the catalogs are in



/usr/lpp/TWS/V8R2M0/catalog/C, the directory must be specified in the BINDIR
                keyword as follows: /usr/lpp/TWS/V8R2M0.

                CODEPAGE(host system codepage/IBM-037)
                Specifies the name of the host code page and applies to the end-to-end feature.
                The value is used by the input translator to convert data received from the
                first-level Tivoli Workload Scheduler domain managers from UTF-8 format to
                EBCDIC format. You can provide the IBM-xxx value, where xxx is the EBCDIC
                code page. The default value, IBM-037, defines the EBCDIC code page for US
                English, Portuguese, and Canadian French.

                For a complete list of available code pages, refer to Tivoli Workload Scheduler for
                z/OS Customization and Tuning, SH19-4544.

                ENABLELISTSECCHK(YES/NO)
                This security option controls the ability to list objects in the plan on an FTA using
                conman and the Job Scheduling Console. Put simply, this option determines
                whether conman and the Tivoli Workload Scheduler connector programs will
                check the Tivoli Workload Scheduler Security file before allowing the user to list
                objects in the plan.

                If set to YES, objects in the plan are shown to the user only if the user has been
                granted the list permission in the Security file. If set to NO, all users will be able to
                list objects in the plan on FTAs, regardless of whether list access is granted in the
                Security file. The default value is NO. Change the value to YES if you want to
                check for the list permission in the security file.

                GRANTLOGONASBATCH(YES/NO)
                This applies only to jobs that run on Windows platforms. If set to YES, the logon
                users for Windows jobs are automatically granted the right to log on as a batch
                job. If set to NO or omitted, the right must be granted manually to each user or
                group. The right cannot be granted automatically for users running jobs on a
                backup domain controller, so you must grant those rights manually.

                HOSTNAME(host name /IP address/ local host name)
                Specifies the host name or the IP address used by the server in the end-to-end
                environment. The default is the host name returned by the operating system.

                If you change the value, you also must restart the Tivoli Workload Scheduler for
                z/OS server and renew the Symphony file.

                As described in Section 3.4.6, “TCP/IP considerations for end-to-end server in
                sysplex” on page 129, you can define a virtual IP address for each server of the
                active controller and the standby controllers. If you use a dynamic virtual IP
                address in a sysplex environment, when the active controller fails and the



standby controller takes over the communication, the FTAs automatically switch
the communication to the server of the standby controller.

To change the HOSTNAME of a server, perform the following actions:
1. Set the nm ipvalidate keyword to off in the localopts file on the first-level
   domain managers.
2. Change the HOSTNAME value of the server using the TOPOLOGY
   statement.
3. Restart the server with the new HOSTNAME value.
4. Renew the Symphony file.
5. If the renewal ends successfully, you can set the ipvalidate to full on the
   first-level domain managers.

See 3.4.6, “TCP/IP considerations for end-to-end server in sysplex” on page 129,
for a description of how to define a DVIPA IP address.
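
To illustrate the procedure above, the corresponding localopts entry on the
first-level domain managers is switched between the two values named in steps 1
and 5; this is only a sketch, so check your localopts file for the exact format:

   nm ipvalidate = off
   nm ipvalidate = full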

LOGLINES(number of lines/100)
Specifies the maximum number of lines that the job log retriever returns for a
single job log. The default value is 100. In all cases, the job log retriever does not
return more than half of the number of records that exist in the input queue.

If the job log retriever does not return all of the job log lines because there are
more lines than the LOGLINES() number of lines, a notice similar to this appears
in the retrieved job log output:
   *** nnn lines have been discarded. Final part of Joblog ... ******

The line specifies the number (nnn) of job log lines not displayed, between the
first lines and the last lines of the job log.

NOPTIMEDEPENDENCY(YES/NO)
With this option, you can change the behavior of noped operations that are
defined on fault-tolerant workstations and have the centralized script option set to
N. By default, Tivoli Workload Scheduler for z/OS completes these noped
operations without waiting for the time dependency to be resolved. With this
option set to YES, the operation is completed in the current plan only after the
time dependency has been resolved. The default value is NO.

 Note: This statement is introduced by APAR PQ84233.

PLANAUDITLEVEL(0/1)
Enables or disables plan auditing for FTAs. Each Tivoli Workload Scheduler
workstation maintains its own log. Valid values are 0 to disable plan auditing and



1 to activate plan auditing. Auditing information is logged to a flat file in the
                TWShome/audit/plan directory. Only actions, not the success or failure of any
                action, are logged in the auditing file. If you change the value, you must restart
                the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

                PORTNUMBER(port/31111)
                Defines the TCP/IP port number that is used by the server to communicate with
                the FTAs. This value has to be different from that specified in the SERVOPTS
                member. The default value is 31111, and accepted values are from 0 to 65535.

                If you change the value, you must restart the Tivoli Workload Scheduler for z/OS
                server and renew the Symphony file.

                  Important: The port number must be unique within a Tivoli Workload
                  Scheduler network.

                SSLLEVEL(ON/OFF/ENABLED/FORCE)
                Defines the type of SSL authentication for the end-to-end server (OPCMASTER
                workstation). It must have one of the following values:
                ON                   The server uses SSL authentication when it connects with
                                     other workstations.
                OFF                  (default value) The server does not support SSL authentication
                                     for its connections.
                ENABLED              The server uses SSL authentication only if another workstation
                                     requires it.
                FORCE                The server uses SSL authentication for all of its connections. It
                                     refuses any incoming connection if it is not SSL.

                If you change the value, you must restart the Tivoli Workload Scheduler for z/OS
                server and renew the Symphony file.

                SSLPORT(SSL port number/31113)
                Defines the port used to listen for incoming SSL connections on the server. It
                substitutes the value of nm SSL port in the localopts file, activating SSL support
                on the server. If SSLLEVEL is specified and SSLPORT is missing, 31113 is used
                as the default value. If SSLLEVEL is not specified, the default value of this
                parameter is 0 on the server, which indicates that no SSL authentication is
                required.

                If you change the value, you must restart the Tivoli Workload Scheduler for z/OS
                server and renew the Symphony file.




TCPIPJOBNAME(TCP/IP started-task name/TCPIP)
Specifies the TCP/IP started-task name used by the server. Set this keyword
when you have multiple TCP/IP stacks or a TCP/IP started task with a name
different from TCPIP. You can specify a name from one to eight alphanumeric or
national characters, where the first character is alphabetic or national.

TPLGYMEM(member name/TPLGINFO)
Specifies the PARMLIB member that contains the domain (DOMREC) and
workstation (CPUREC) definitions specific to end-to-end scheduling. The default
value is TPLGINFO.

If you change the value, you must restart the Tivoli Workload Scheduler for z/OS
server and renew the Symphony file.

TRCDAYS(days/14)
Specifies the number of days that the trace files and the files in the stdlist
directory are kept before being deleted. Every day, the USS code creates a new
stdlist directory to contain the logs for the day. All log directories that are older
than the number of days specified in TRCDAYS() are deleted automatically. The
default value is 14. Specify 0 if you do not want the trace files to be deleted.

 Recommendation: Monitor the size of your working directory (that is, the size
 of the HFS cluster with work files) to prevent the HFS cluster from becoming
 full. The trace files and files in the stdlist directory contain internal logging
 information and Tivoli Workload Scheduler messages that may be useful for
 troubleshooting. You should consider deleting them at regular intervals by
 using the TRCDAYS() parameter.

USRMEM(member name/USRINFO)
Specifies the PARMLIB member where the user definitions are. This keyword is
optional except if you are going to schedule jobs on Windows operating systems,
in which case, it is required.

The default value is USRINFO.

If you change the value, you must restart the Tivoli Workload Scheduler for z/OS
server and renew the Symphony file.

WRKDIR(directory name)
Specifies the location of the working files for an end-to-end server started task.
Each Tivoli Workload Scheduler for z/OS end-to-end server must have its own
WRKDIR.




ENABLESWITCHFT(Y/N)
                This is a new parameter (not shown in Figure 4-7 on page 179) that was
                introduced in FixPack 04 for Tivoli Workload Scheduler and APAR PQ81120 for
                Tivoli Workload Scheduler for z/OS.

                It is used to activate the enhanced fault-tolerant mechanism on domain
                managers. The default is N, meaning that the enhanced fault-tolerant mechanism
                is not activated. For more information, check the documentation in the
                FaultTolerantSwitch.README.pdf file delivered with FixPack 04 for Tivoli
                Workload Scheduler.


4.2.7 Initialization statements used to describe the topology
                With the last three parameters listed in Table 4-3 on page 175, DOMREC,
                CPUREC, and USRREC, you define the topology of the distributed Tivoli
                Workload Scheduler network in Tivoli Workload Scheduler for z/OS. The defined
                topology is used by the plan extend, replan, and Symphony renew batch jobs
                when creating the Symphony file for the distributed Tivoli Workload Scheduler
                network.

                Figure 4-8 on page 185 shows how the distributed Tivoli Workload Scheduler
                topology is described using CPUREC and DOMREC initialization statements for
                the Tivoli Workload Scheduler for z/OS server and plan programs. The Tivoli
                Workload Scheduler for z/OS fault-tolerant workstations are mapped to physical
                Tivoli Workload Scheduler agents or workstations using the CPUREC statement.
                The DOMREC statement is used to describe the domain topology in the
                distributed Tivoli Workload Scheduler network.
                Note that the MASTERDM domain is predefined in Tivoli Workload Scheduler for
                z/OS. It is not necessary to specify a DOMREC parameter for the MASTERDM
                domain.

                Also note that the USRREC parameters are not depicted in Figure 4-8 on
                page 185.




Figure 4-8 The topology definitions for server and plan programs

                 In the following sections, we walk through the DOMREC, CPUREC, and
                 USRREC statements.

                 DOMREC statement
                 This statement begins a domain definition. You must specify one DOMREC for
                 each domain in the Tivoli Workload Scheduler network, with the exception of the
                 master domain.

                 The domain name used for the master domain is MASTERDM. The master
                 domain consists of the controller, which acts as the master domain manager. The
                 CPU name used for the master domain manager is OPCMASTER.

                 You must specify at least one domain, child of MASTERDM, where the domain
                 manager is a fault-tolerant agent. If you do not define this domain, Tivoli
                 Workload Scheduler for z/OS tries to find a domain definition that can function as
                 a child of the master domain.




  The example network has a master domain (MASTERDM, managed by OPCMASTER) and
  two child domains: DomainA, managed by A000 with agents A001 and A002, and
  DomainB, managed by B000 with agents B001 and B002.

  DOMRECs in the topology member, EQQPARM(TPLGINFO):

    DOMREC   DOMAIN(DOMAINA)
             DOMMNGR(A000)
             DOMPARENT(MASTERDM)
    DOMREC   DOMAIN(DOMAINB)
             DOMMNGR(B000)
             DOMPARENT(MASTERDM)
    ...

  OPC does not have a built-in place to store information about TWS domains.
  Domains and their relationships are defined in DOMRECs. There is no DOMREC for
  the master domain, MASTERDM. DOMRECs are used to add information about TWS
  domains to the Symphony file.

Figure 4-9 Example of two DOMREC statements for a network with two domains

                     DOMREC is defined in the member of the EQQPARM library that is specified by
                     the TPLGYMEM keyword in the TOPOLOGY statement (see Figure 4-6 on
                     page 176 and Figure 4-9).

                     Figure 4-10 illustrates the DOMREC syntax.




Figure 4-10 Syntax for the DOMREC statement

                     DOMAIN(domain name)
                     The name of the domain, consisting of up to 16 characters starting with a letter. It
                     can contain dashes and underscores.




DOMMNGR(domain manager name)
The Tivoli Workload Scheduler workstation name of the domain manager. It must
be a fault-tolerant agent running in full status mode.

DOMPARENT(parent domain)
The name of the parent domain.

CPUREC statement
This statement begins a Tivoli Workload Scheduler workstation (CPU) definition.
You must specify one CPUREC for each workstation in the Tivoli Workload
Scheduler network, with the exception of the controller that acts as master
domain manager. You must provide a definition for each workstation of Tivoli
Workload Scheduler for z/OS that is defined in the database as a Tivoli Workload
Scheduler fault-tolerant workstation.

CPUREC is defined in the member of the EQQPARM library that is specified by
the TPLGYMEM keyword in the TOPOLOGY statement (see Figure 4-6 on
page 176 and Figure 4-11 on page 188).




  Fault-tolerant workstations defined in OPC: A000, B000, A001, A002, B001,
  B002, and so on. OPC does not have fields to contain the extra information in
  a TWS workstation definition, so OPC workstations marked fault tolerant must
  also have a CPUREC; the workstation name in OPC acts as a pointer to the
  CPUREC. There is no CPUREC for the master domain manager, OPCMASTER.
  CPURECs are used to add information about domain managers and FTAs to the
  Symphony file.

  CPURECs in the topology member, EQQPARM(TPLGINFO):

    CPUREC   CPUNAME(A000)
             CPUOS(AIX)
             CPUNODE(stockholm)
             CPUTCPIP(31281)
             CPUDOMAIN(DomainA)
             CPUTYPE(FTA)
             CPUAUTOLINK(ON)
             CPUFULLSTAT(ON)
             CPURESDEP(ON)
             CPULIMIT(20)
             CPUTZ(ECT)
             CPUUSER(root)
    CPUREC   CPUNAME(A001)
             CPUOS(WNT)
             CPUNODE(copenhagen)
             CPUDOMAIN(DOMAINA)
             CPUTYPE(FTA)
             CPUAUTOLINK(ON)
             CPULIMIT(10)
             CPUTZ(ECT)
             CPUUSER(Administrator)
             FIREWALL(Y)
             SSLLEVEL(FORCE)
             SSLPORT(31281)
    ...

  Valid CPUOS values: AIX, HPUX, POSIX, UNIX, WNT, OTHER.

Figure 4-11 Example of two CPUREC statements for two workstations

                    Figure 4-12 on page 189 illustrates the CPUREC syntax.




Figure 4-12 Syntax for the CPUREC statement

CPUNAME(cpu name)
The name of the Tivoli Workload Scheduler workstation, consisting of up to four
alphanumerical characters, starting with a letter.



CPUOS(operating system)
                The host CPU operating system related to the Tivoli Workload Scheduler
                workstation. The valid entries are AIX, HPUX, POSIX, UNIX, WNT, and OTHER.

                CPUNODE(node name)
                The node name or the IP address of the CPU. Fully-qualified domain names up
                to 52 characters are accepted.

                CPUTCPIP(port number/31111)
                The TCP port number of netman on this CPU. It can be up to five digits; if
                omitted, the default value 31111 is used.

                CPUDOMAIN(domain name)
                The name of the Tivoli Workload Scheduler domain of the CPU.

                CPUHOST(cpu name)
                The name of the host CPU of the agent. It is required for standard and extended
                agents. The host is the Tivoli Workload Scheduler CPU with which the standard
                or extended agent communicates and where its access method resides.

                  Note: The host cannot be another standard or extended agent.

                CPUACCESS(access method)
                The name of the access method. It is valid for extended agents and must be the
                name of a file that resides in the Tivoli Workload Scheduler <home>/methods
                directory on the host CPU of the agent.

                CPUTYPE(SAGENT/ XAGENT/ FTA)
                The CPU type specified as one of the following:
                FTA                        (default) Fault-tolerant agent, including domain managers
                                           and backup domain managers.
                SAGENT                     Standard agent
                XAGENT                     Extended agent

                  Note: If the extended-agent workstation is manually set to Link, Unlink, Active,
                  or Offline, the command is sent to its host CPU.

                CPUAUTOLNK(OFF/ON)
                Autolink is most effective during the initial start-up sequence of each plan, when
                a new Symphony file is created and all workstations are stopped and restarted.




For a fault-tolerant agent or standard agent, specify ON so that, when the domain
manager starts, it sends the new production control file (Symphony) to start the
agent and open communication with it.

For the domain manager, specify ON so that, when the agents start, they open
communication with the domain manager.

Specify OFF to initialize an agent when you submit a link command manually
from the Tivoli Workload Scheduler for z/OS Modify Current Plan ISPF dialogs or
from the Job Scheduling Console.

 Note: If the X-agent workstation is manually set to Link, Unlink, Active, or
 Offline, the command is sent to its host CPU.

CPUFULLSTAT(ON/OFF)
This applies only to fault-tolerant agents. If you specify OFF for a domain
manager, the value is forced to ON.

Specify ON for the link from the domain manager to operate in Full Status mode.
In this mode, the agent is kept updated about the status of jobs and job streams
that are running on other workstations in the network.

Specify OFF for the agent to receive status information only about the jobs and
schedules on other workstations that affect its own jobs and schedules. This can
improve the performance by reducing network traffic.

To keep the production control file for an agent at the same level of detail as its
domain manager, set CPUFULLSTAT and CPURESDEP (see below) to ON.
Always set these modes to ON for backup domain managers.

You should also be aware of the new TOPOLOGY ENABLESWITCHFT()
parameter described in “ENABLESWITCHFT(Y/N)” on page 184.

CPURESDEP(ON/OFF)
This applies only to fault-tolerant agents. If you specify OFF for a domain
manager, the value is forced to ON.

Specify ON to have the agent’s production control process operate in Resolve All
Dependencies mode. In this mode, the agent tracks dependencies for all of its
jobs and schedules, including those running on other CPUs.

 Note: CPUFULLSTAT must also be ON so that the agent is informed about
 the activity on other workstations.




Specify OFF if you want the agent to track dependencies only for its own jobs
                and schedules. This reduces CPU usage by limiting processing overhead.

                To keep the production control file for an agent at the same level of detail as its
                domain manager, set CPUFULLSTAT and CPURESDEP to ON. Always set these
                modes to ON for backup domain managers.

                You should also be aware of the new TOPOLOGY ENABLESWITCHFT()
                parameter that is described in “ENABLESWITCHFT(Y/N)” on page 184.

                CPUSERVER(server ID)
                This applies only to fault-tolerant and standard agents. Omit this option for
                domain managers.

                This keyword can be a letter or a number (A-Z or 0-9) and identifies a server
                (mailman) process on the domain manager that sends messages to the agent.
                The IDs are unique to each domain manager, so you can use the same IDs for
                agents in different domains without conflict. If more than 36 server IDs are
                required in a domain, consider dividing it into two or more domains.

                If a server ID is not specified, messages to a fault-tolerant or standard agent are
                handled by a single mailman process on the domain manager. Entering a server
                ID causes the domain manager to create an additional mailman process. The
                same server ID can be used for multiple agents. The use of servers reduces the
                time that is required to initialize agents and generally improves the timeliness of
                messages.

                  Notes on multiple mailman processes:
                      When setting up multiple mailman processes, do not forget that each
                      mailman server process uses extra CPU resources on the workstation on
                      which it is created, so be careful not to create excessive mailman
                      processes on low-end domain managers. In most cases, using extra
                      domain managers is a better choice than configuring extra mailman
                      processes.
                      Cases in which use of extra mailman processes might be beneficial
                      include:
                      – Important FTAs that run mission critical jobs.
                      – Slow-initializing FTAs that are at the other end of a slow link. (If you
                        have more than a couple of workstations over a slow link connection to
                        the OPCMASTER, a better idea is to place a remote domain manager
                        to serve these workstations.)
                      If you have unstable workstations in the network, do not put them under the
                      same mailman server ID with your critical servers.


See Figure 4-13 for an example of CPUSERVER() use. The figure shows that
one mailman process on domain manager FDMA has to handle all outbound
communication with the five FTAs (FTA1 to FTA5) if these workstations (CPUs)
are defined without the CPUSERVER() parameter. If FTA1 and FTA2 are defined
with CPUSERVER(A), and FTA3 and FTA4 are defined with CPUSERVER(1),
the domain manager FDMA will start two new mailman processes for these two
server IDs (A and 1).


                 No Server IDs: the main mailman process on the AIX domain manager FDMA handles
                 all outbound communications with the FTAs in the domain: FTA1 (Linux), FTA2
                 (Solaris), FTA3 (Windows 2000), FTA4 (HPUX), and FTA5 (OS/400), none of which
                 has a server ID.

                 2 Different Server IDs: an extra mailman process is spawned for each server ID
                 in the domain; FTA1 and FTA2 use server ID A (SERVERA mailman), FTA3 and FTA4
                 use server ID 1 (SERVER1 mailman), and FTA5, with no server ID, stays with the
                 main mailman process.

Figure 4-13 Usage of CPUSERVER() IDs to start extra mailman processes
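
The corresponding CPUREC definitions differ only in their CPUSERVER keyword; as a
sketch (all other keywords are omitted and indicated by ellipses):

    CPUREC   CPUNAME(FTA1)   ...   CPUSERVER(A)
    CPUREC   CPUNAME(FTA2)   ...   CPUSERVER(A)
    CPUREC   CPUNAME(FTA3)   ...   CPUSERVER(1)
    CPUREC   CPUNAME(FTA4)   ...   CPUSERVER(1)
    CPUREC   CPUNAME(FTA5)   ...                  /* No CPUSERVER: main mailman */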

CPULIMIT(value/1024)
Specifies the number of jobs that can run at the same time in a CPU. The default
value is 1024. The accepted values are integers from 0 to 1024. If you specify 0,
no jobs are launched on the workstation.

CPUTZ(timezone/UTC)
Specifies the local time zone of the FTA. It must match the time zone of the
operating system in which the FTA runs. For a complete list of valid time zones,
refer to the appendix of the IBM Tivoli Workload Scheduler Reference Guide,
SC32-1274.

If the time zone does not match that of the agent, the message AWSBHT128I is
                displayed in the log file of the FTA. The default is UTC (universal coordinated
                time).

                To avoid inconsistency between the local date and time of the jobs and of the
                Symphony creation, use the CPUTZ keyword to set the local time zone of the
                fault-tolerant workstation. If the Symphony creation date is later than the current
                local date of the FTW, Symphony is not processed.
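
                 For example, a fault-tolerant workstation whose operating system runs on
                 Copenhagen time would carry the following keyword (the same value is used in
                 Example 4-4 later in this chapter):

                     CPUTZ(Europe/Copenhagen)    /* Matches the FTA operating system time zone */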

                In the end-to-end environment, time zones are disabled by default when installing
                or upgrading Tivoli Workload Scheduler for z/OS. If the CPUTZ keyword is not
                specified, time zones are disabled. For additional information about how to set
                the time zone in an end-to-end network, see the IBM Tivoli Workload Scheduler
                Planning and Installation Guide, SC32-1273.

                CPUUSER(default user/tws)
                Specifies the default user for the workstation. The maximum length is 47
                characters. The default value is tws.

                The value of this option is used only if you have not defined the user in the
                JOBUSR option of the SCRPTLIB JOBREC statement or supply it with the Tivoli
                Workload Scheduler for z/OS job submit exit EQQUX001 for centralized script.

                SSLLEVEL(ON/OFF/ENABLED/FORCE)
                Must have one of the following values:
                ON                  The workstation uses SSL authentication when it connects with
                                    its domain manager. The domain manager uses the SSL
                                    authentication when it connects with a domain manager of a
                                    parent domain. However, it refuses any incoming connection
                                    from its domain manager if the connection does not use the SSL
                                    authentication.
                OFF                 (default) The workstation does not support SSL authentication
                                    for its connections.
                ENABLED             The workstation uses SSL authentication only if another
                                    workstation requires it.
                FORCE               The workstation uses SSL authentication for all of its
                                    connections. It refuses any incoming connection if it is not SSL.

                If this attribute is set to OFF or omitted, the workstation is not intended to be
                configured for SSL. In this case, any value for SSLPORT (see below) will be
                ignored. You should also set the nm ssl port local option to 0 (in the localopts file)
                to be sure that this port is not opened by netman.

SSLPORT(SSL port number/31113)
Defines the port used to listen for incoming SSL connections. This value must
match the one defined in the nm SSL port local option (in the localopts file) of the
workstation (the server with Tivoli Workload Scheduler installed). It must be
different from the nm port local option (in the localopts file) that defines the port
used for normal communications. If SSLLEVEL is specified but SSLPORT is
missing, 31113 is used as the default value. If not even SSLLEVEL is specified,
the default value of this parameter is 0 on FTWs, which indicates that no SSL
authentication is required.
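
As a sketch (31113 is the documented default SSL port; the localopts values are
illustrative and must be adapted to your installation), the CPUREC keywords and the
workstation's localopts file have to agree:

    CPUREC   ...
             SSLLEVEL(ON)        /* Use SSL authentication                */
             SSLPORT(31113)      /* Must match nm ssl port in localopts   */

    In the localopts file on the workstation:
    nm port     =31111           # normal netman port
    nm ssl port =31113           # must differ from nm port, match SSLPORT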

FIREWALL(YES/NO)
Specifies whether the communication between a workstation and its domain
manager must cross a firewall. If you set the FIREWALL keyword for a
workstation to YES, it means that a firewall exists between that particular
workstation and its domain manager, and that the link between the domain
manager and the workstation (which can be another domain manager itself) is
the only link that is allowed between the respective domains. Also, for all
workstations having this option set to YES, the commands to start (start
workstation) or stop (stop workstation) the workstation or to get the standard list
(showjobs) are transmitted through the domain hierarchy instead of opening a
direct connection between the master (or domain manager) and the workstation.
The default value for FIREWALL is NO, meaning that there is no firewall
boundary between the workstation and its domain manager.

To specify that an extended agent is behind a firewall, set the FIREWALL
keyword for the host workstation. The host workstation is the Tivoli Workload
Scheduler workstation with which the extended agent communicates and where
its access method resides.
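
For instance, a sketch for a hypothetical agent F103 that sits on the far side of a
firewall from its domain manager (only the relevant keyword is shown):

    CPUREC   CPUNAME(F103)       /* Hypothetical FTA behind a firewall        */
             ...
             FIREWALL(YES)       /* Start, stop, and showjobs requests are    */
                                 /* routed through the domain hierarchy       */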

USRREC statement
This statement defines the passwords for the users who need to schedule jobs to
run on Windows workstations.

USRREC is defined in the member of the EQQPARM library as specified by the
USERMEM keyword in the TOPOLOGY statement. (See Figure 4-6 on page 176
and Figure 4-15 on page 197.)

Figure 4-14 illustrates the USRREC syntax.




Figure 4-14 Syntax for the USRREC statement
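
Because the syntax diagram is reproduced as an image, the following is a minimal
textual sketch of the statement, using the keywords that are described below:

    USRREC   USRCPU(workstation name)
             USRNAM(logon ID)
             USRPSW('password')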

USRCPU(cpuname)
                The Tivoli Workload Scheduler workstation on which the user can launch jobs. It
                consists of four alphanumerical characters, starting with a letter. It is valid only on
                Windows workstations.

                USRNAM(logon ID)
                 The user name of a Windows workstation. It can include a domain name and can
                 consist of up to 47 characters.

                Windows user names are case-sensitive. The user must be able to log on to the
                computer on which Tivoli Workload Scheduler has launched jobs, and must also
                be authorized to log on as batch.

                If the user name is not unique in Windows, it is considered to be either a local
                user, a domain user, or a trusted domain user, in that order.

                 USRPSW(password)
                The user password for the user of a Windows workstation (Figure 4-15 on
                page 197). It can consist of up to 31 characters and must be enclosed in single
                quotation marks. Do not specify this keyword if the user does not need a
                password. You can change the password every time you create a Symphony file
                (when creating a CP extension).

                  Attention: The password is not encrypted. You must take the necessary
                  action to protect the password from unauthorized access.

                  One way to do this is to place the USRREC definitions in a separate member
                  in a separate library. This library should then be protected with RACF so it can
                  be accessed only by authorized persons. The library should be added in the
                  EQQPARM data set concatenation in the end-to-end server started task and
                  in the plan extend, replan, and Symphony renew batch jobs.

                  Example JCL for plan replan, extend, and Symphony renew batch jobs:
                      //EQQPARM    DD DISP=SHR,DSN=TWS.V8R20.PARMLIB(BATCHOPT)
                      //           DD DISP=SHR,DSN=TWS.V8R20.PARMUSR

                  In this example, the USRREC member is placed in the
                  TWS.V8R20.PARMUSR library. This library can then be protected with RACF
                  according to your standards. All other BATCHOPT initialization statements are
                  placed in the usual parameter library. In the example, this library is named
                  TWS.V8R20.PARMLIB and the member is BATCHOPT.

  The figure shows USRREC entries placed in the user member of EQQPARM (member
  USERINFO in this example): two definitions for workstation F202 (users tws and
  Jim Smith) and one for workstation F302 (user SouthMUser1). OPC does not have a
  built-in way to store Windows users and passwords; for this reason, the users are
  defined by adding USRRECs to the user member of EQQPARM. USRRECs are used to add
  Windows NT user definitions to the Symphony file.
Figure 4-15 Example of three USRREC definitions: for a local and domain Windows user


4.2.8 Example of DOMREC and CPUREC definitions
                         We have explained how to use DOMREC and CPUREC statements to define the
                         network topology for a Tivoli Workload Scheduler network in a Tivoli Workload
                         Scheduler for z/OS end-to-end environment. We now use these statements to
                         define a simple Tivoli Workload Scheduler network in Tivoli Workload Scheduler
                         for z/OS.

                         As an example, Figure 4-16 on page 198 illustrates a simple Tivoli Workload
                         Scheduler network. In this network there is one domain, DOMAIN1, under the
                         master domain (MASTERDM).

                 MASTERDM: master domain manager OPCMASTER on z/OS.
                 DOMAIN1:  domain manager F100 (AIX, copenhagen.dk.ibm.com), with fault-tolerant
                           agents F101 (AIX, london.uk.ibm.com, backup domain manager) and
                           F102 (Windows, stockholm.se.ibm.com).

                 Figure 4-16 Simple end-to-end scheduling environment

                 Example 4-3 describes the DOMAIN1 domain with the DOMREC topology
                 statement.
                Example 4-3 Domain definition
                DOMREC     DOMAIN(DOMAIN1)                        /* Name of the domain is DOMAIN1 */
                           DOMMMNGR(F100)                         /* F100 workst. is domain mng.   */
                           DOMPARENT(MASTERDM)                    /* Domain parent is MASTERDM     */


                In end-to-end, the master domain (MASTERDM) is always the Tivoli Workload
                Scheduler for z/OS controller. (It is predefined and cannot be changed.) Since
                the DOMAIN1 domain is under the MASTERDM domain, MASTERDM must be
                 defined in the DOMPARENT parameter. The DOMMNGR keyword specifies the
                 name of the domain manager workstation.

                There are three workstations (CPUs) in the DOMAIN1 domain. To define these
                workstations in the Tivoli Workload Scheduler for z/OS end-to-end network, we
                must define three CPURECs, one for each workstation (server) in the network.
                Example 4-4 Workstation (CPUREC) definitions for the three FTWs
                CPUREC     CPUNAME(F100)                          /* Domain manager for DM100                     */
                           CPUOS(AIX)                             /* AIX operating system                         */
CPUNODE(copenhagen.dk.ibm.com)        /* IP address of CPU (DNS) */
         CPUTCPIP(31281)             /*       TCP port number of NETMAN     */
         CPUDOMAIN(DM100)            /*       The TWS domain name for CPU */
         CPUTYPE(FTA)                /*       This is a FTA CPU type        */
         CPUAUTOLNK(ON)              /*       Autolink is on for this CPU   */
         CPUFULLSTAT(ON)             /*       Full status on for DM         */
         CPURESDEP(ON)               /*       Resolve dependencies on for DM*/
         CPULIMIT(20)                /*       Number of jobs in parallel    */
         CPUTZ(Europe/Copenhagen)    /*       Time zone for this CPU        */
         CPUUSER(twstest)            /*       default user for CPU          */
         SSLLEVEL(OFF)               /*       SSL is not active             */
         SSLPORT(31113)              /*       Default SSL port              */
         FIREWALL(NO)                /*       WS not behind firewall        */
CPUREC   CPUNAME(F101)               /*       fault tolerant agent in DM100 */
         CPUOS(AIX)                  /*       AIX operating system          */
         CPUNODE(london.uk.ibm.com)            /* IP address of CPU (DNS) */
         CPUTCPIP(31281)             /*       TCP port number of NETMAN     */
         CPUDOMAIN(DM100)            /*       The TWS domain name for CPU */
         CPUTYPE(FTA)                /*       This is a FTA CPU type        */
         CPUAUTOLNK(ON)              /*       Autolink is on for this CPU   */
         CPUFULLSTAT(ON)             /*       Full status on for BDM        */
         CPURESDEP(ON)               /*       Resolve dependencies on BDM   */
         CPULIMIT(20)                /*       Number of jobs in parallel    */
         CPUSERVER(A)                /*       Start extra mailman process   */
         CPUTZ(Europe/London)        /*       Time zone for this CPU        */
         CPUUSER(maestro)            /*       default user for ws           */
         SSLLEVEL(OFF)               /*       SSL is not active             */
         SSLPORT(31113)              /*       Default SSL port              */
         FIREWALL(NO)                /*       WS not behind firewall        */
CPUREC   CPUNAME(F102)               /*       fault tolerant agent in DM100 */
         CPUOS(WNT)                  /*       Windows operating system      */
         CPUNODE(stockholm.se.ibm.com)         /* IP address for CPU (DNS) */
         CPUTCPIP(31281)             /*       TCP port number of NETMAN     */
         CPUDOMAIN(DM100)            /*       The TWS domain name for CPU */
         CPUTYPE(FTA)                /*       This is a FTA CPU type        */
         CPUAUTOLNK(ON)              /*       Autolink is on for this CPU   */
         CPUFULLSTAT(OFF)            /*       Full status off for FTA       */
         CPURESDEP(OFF)              /*       Resolve dependencies off FTA */
         CPULIMIT(10)                /*       Number of jobs in parallel    */
         CPUSERVER(A)                /*       Start extra mailman process   */
         CPUTZ(Europe/Stockholm)     /*       Time zone for this CPU        */
         CPUUSER(twstest)            /*       default user for ws           */
         SSLLEVEL(OFF)               /*       SSL is not active             */
         SSLPORT(31113)              /*       Default SSL port              */
         FIREWALL(NO)                /*       WS not behind firewall        */

Because F101 is going to be the backup domain manager for F100, F101 is defined
                 with CPUFULLSTAT(ON) and CPURESDEP(ON).

                 F102 is a fault-tolerant agent without extra responsibilities, so it is defined
                 with CPUFULLSTAT(OFF) and CPURESDEP(OFF) because dependency resolution
                 within the domain is the task of the domain manager. This improves
                 performance by reducing network traffic.

                  Note: CPUOS(WNT) applies for all Windows platforms.

                Finally, since F102 runs on a Windows server, we must create at least one
                USRREC definition for this server. In our example, we would like to be able to run
                jobs on the Windows server under either the Tivoli Workload Scheduler
                installation user (twstest) or the database user, databusr.
                Example 4-5 USRREC definition for tws F102 Windows users, twstest and databusr
                USRREC     USRCPU(F102)                   /*   Definition for F102 Windows CPU */
                           USRNAM(twstest)                /*   The user name (local user)      */
                           USRPSW('twspw01')              /*   The password for twstest        */
                USRREC     USRCPU(F102)                   /*   Definition for F102 Windows CPU */
                           USRNAM(databusr)               /*   The user name (local user)      */
                           USRPSW('data01ad')             /*   Password for databusr           */



4.2.9 The JTOPTS TWSJOBNAME() parameter
                With the JTOPTS TWSJOBNAME() parameter, it is possible to specify different
                criteria that Tivoli Workload Scheduler for z/OS should use when creating the job
                name in the Symphony file in USS.

                The syntax for the JTOPTS TWSJOBNAME() parameter is:
                    TWSJOBNAME(EXTNAME/EXTNOCC/JOBNAME/OCCNAME)

                If you do not specify the TWSJOBNAME() parameter, the value OCCNAME is
                used by default.

                When choosing OCCNAME, the job names in the Symphony file will be
                generated with one of the following formats:
                    <X>_<Num>_<Application Name> when the job is created in the Symphony file
                    <X>_<Num>_<Ext>_<Application Name> when the job is first deleted and then
                    recreated in the current plan
                    In these examples, <X> can be J for normal jobs (operations), P for jobs
                    representing pending predecessors, and R for recovery jobs.

<Num> is the operation number.
                       <Ext> is a sequential decimal number that is increased every time an
                       operation is deleted and then recreated.
                       <Application Name> is the name of the occurrence that the operation belongs
                       to.

                   See Figure 4-17 for an example of how the job names (and job stream names)
                   are generated by default in the Symphony file when JTOPTS
                   TWSJOBNAME(OCCNAME) is specified or defaulted.

                    Note that an occurrence in Tivoli Workload Scheduler for z/OS is the same as a
                    JSC job stream instance (that is, a job stream or an application that is in the
                    plan in Tivoli Workload Scheduler for z/OS).



   OPC Current Plan                                    Symphony File
   Job stream instance (application occurrence)        Job stream instance (schedule)
   DAILY, input arrival 0800, occurrence token         B8FF08015E683C44, with job instances
   B8FF08015E683C44, with operations 010 DLYJOB1,      J_010_DAILY, J_015_DAILY, J_020_DAILY
   015 DLYJOB2, and 020 DLYJOB3

   Job stream instance (application occurrence)        Job stream instance (schedule)
   DAILY, input arrival 0900, occurrence token         B8FFF05B29182108, with job instances
   B8FFF05B29182108, with operations 010 DLYJOB1,      J_010_DAILY, J_015_DAILY, J_020_DAILY
   015 DLYJOB2, and 020 DLYJOB3

  Each instance of a job stream in OPC is assigned a unique occurrence token. If the job
  stream is added to the TWS Symphony file, the occurrence token is used as the job stream
  name in the Symphony file.
Figure 4-17 Generation of job and job stream names in the Symphony file

If any of the other values (EXTNAME, EXTNOCC, or JOBNAME) is specified in
                the JTOPTS TWSJOBNAME() parameter, the job name in the Symphony file is
                created according to one of the following formats:
                    <X><Num>_<JobInfo> when the job is created in the Symphony file
                    <X><Num>_<Ext>_<JobInfo> when the job is first deleted and then recreated in
                    the current plan
                    In these examples:
                    <X> can be J for normal jobs (operations), P for jobs representing pending
                    predecessors, and R for recovery jobs. For jobs representing pending
                    predecessors, the job name is in all cases generated by using the OCCNAME
                    criterion. This is because, in the case of pending predecessors, the current
                    plan does not contain the required information (excepting the name of the
                    occurrence) to build the Symphony name according to the other criteria.
                    <Num> is the operation number.
                    <Ext> is the hexadecimal value of a sequential number that is increased every
                    time an operation is deleted and then recreated.
                    <JobInfo> depends on the chosen criterion:
                    – For EXTNAME: <JobInfo> is filled with the first 32 characters of the
                      extended job name associated with that job (if it exists) or with the
                      eight-character job name (if the extended name does not exist).
                      Note that the extended job name, in addition to being defined in the
                      database, must also exist in the current plan.
                    – For EXTNOCC: <JobInfo> is filled with the first 32 characters of the
                      extended job name associated with that job (if it exists) or with the
                      application name (if the extended name does not exist).
                      Note that the extended job name, in addition to being defined in the
                      database, must also exist in the current plan.
                    – For JOBNAME: <JobInfo> is filled with the 8-character job name.

                The criterion that is used to generate a Tivoli Workload Scheduler job name will
                be maintained throughout the entire life of the job.
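
                 For example, to build the job names in the Symphony file from the eight-character
                 job name rather than from the occurrence name, you would code the following
                 (a sketch; any other JTOPTS keywords your installation uses are not shown):

                     JTOPTS  TWSJOBNAME(JOBNAME)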

                  Note: In order to choose the EXTNAME, EXTNOCC, or JOBNAME criterion,
                  the EQQTWSOU data set must have a record length of 160 bytes. Before
                  using any of the above keywords, you must migrate the EQQTWSOU data set
                  if you have allocated the data set with a record length less than 160 bytes.
                  Sample EQQMTWSO is available to migrate this data set from record length
                  120 to 160 bytes.

Limitations when using the EXTNAME and EXTNOCC criteria:
              The job name in the Symphony file can contain only alphanumeric characters,
              dashes, and underscores. All other characters that are accepted for the
              extended job name are converted into dashes. Note that a similar limitation
              applies with JOBNAME: When defining members of partitioned data sets
              (such as the script or the job libraries), national characters can be used, but
              they are converted into dashes in the Symphony file.
              The job name in the Symphony file must be in uppercase. All lowercase
              characters in the extended name are automatically converted to uppercase by
              Tivoli Workload Scheduler for z/OS.

            Note: Using the job name (or the extended name as part of the job name) in
            the Symphony file implies that it becomes a key for identifying the job. This
             also means that the extended name or job name is used as a key for
             addressing all events that are directed to the agents. For this reason, be aware
            of the following facts for the operations that are included in the Symphony file:
               Editing the extended name is inhibited for operations that are created when
               the TWSJOBNAME keyword was set to EXTNAME or EXTNOCC.
               Editing the job name is inhibited for operations created when the
               TWSJOBNAME keyword was set to EXTNAME or JOBNAME.


4.2.10 Verify end-to-end installation in Tivoli Workload Scheduler for
z/OS
           When all installation tasks as described in the previous sections have been
           completed, and all initialization statements and data sets related to end-to-end
           scheduling have been defined in the Tivoli Workload Scheduler for z/OS
           controller, end-to-end server, and plan extend, replan, and Symphony renew
           batch jobs, it is time to do the first verification of the mainframe part.

            Note: This verification can be postponed until workstations for the
            fault-tolerant agents have been defined in Tivoli Workload Scheduler for z/OS
            and, optionally, Tivoli Workload Scheduler has been installed on the
            fault-tolerant agents (the Tivoli Workload Scheduler servers or agents).


           Verify the Tivoli Workload Scheduler for z/OS controller
            After the customization steps have been completed, simply start the Tivoli
            Workload Scheduler for z/OS controller. Check the controller message log
            (EQQMLOG) for any unexpected error or warning messages. All Tivoli Workload
            Scheduler for z/OS messages are prefixed with EQQ. See the IBM Tivoli Workload
            Scheduler for z/OS Messages and Codes Version 8.2 (Maintenance Release April
            2004), SC32-1267.

                Because we have activated the end-to-end feature in the controller initialization
                statements by specifying the OPCOPTS TPLGYSRV() parameter and we have
                asked the controller to start our end-to-end server by the SERVERS(TWSCE2E)
                parameter, we will see messages as shown in Example 4-6 in the Tivoli
                Workload Scheduler for z/OS controller message log (EQQMLOG).
                Example 4-6 IBM Tivoli Workload Scheduler for z/OS controller messages for end-to-end
                EQQZ005I   OPC SUBTASK E2E ENABLER      IS BEING STARTED
                EQQZ085I   OPC SUBTASK E2E SENDER       IS BEING STARTED
                EQQZ085I   OPC SUBTASK E2E RECEIVER     IS BEING STARTED
                EQQG001I   SUBTASK E2E ENABLER HAS STARTED
                EQQG001I   SUBTASK E2E SENDER HAS STARTED
                EQQG001I   SUBTASK E2E RECEIVER HAS STARTED
                EQQW097I   END-TO-END RECEIVER STARTED SYNCHRONIZATION WITH THE EVENT MANAGER
                EQQW097I         0 EVENTS IN EQQTWSIN WILL BE REPROCESSED
                EQQW098I   END-TO-END RECEIVER FINISHED SYNCHRONIZATION WITH THE EVENT MANAGER
                EQQ3120E   END-TO-END TRANSLATOR SERVER PROCESS IS NOT AVAILABLE
                EQQZ193I   END-TO-END TRANSLATOR SERVER PROCESSS NOW IS AVAILABLE


                  Note: If you do not see all of these messages in your controller message log,
                  you probably have not applied all available service updates. See 3.4.2,
                  “Service updates (PSP bucket, APARs, and PTFs)” on page 117.

                 The messages in Example 4-6 are extracted from the Tivoli Workload Scheduler
                 for z/OS controller message log. In your own controller message log you will see
                 several other messages interleaved with those shown in Example 4-6.
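
                 For reference, this behavior is driven by controller initialization statements
                 along the lines of the following sketch (TWSCE2E is the end-to-end server name
                 used in this chapter; all other OPCOPTS keywords are omitted):

                     OPCOPTS  TPLGYSRV(TWSCE2E)     /* Activate end-to-end scheduling       */
                              SERVERS(TWSCE2E)      /* Start the end-to-end server task     */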

                If the Tivoli Workload Scheduler for z/OS controller is started with empty
                EQQTWSIN and EQQTWSOU data sets, messages shown in Example 4-7 will
                be issued in the controller message log (EQQMLOG).
                Example 4-7 Formatting messages when EQQTWSOU and EQQTWSIN are empty
                EQQW030I   A   DISK   DATA   SET   WILL BE FORMATTED, DDNAME = EQQTWSOU
                EQQW030I   A   DISK   DATA   SET   WILL BE FORMATTED, DDNAME = EQQTWSIN
                EQQW038I   A   DISK   DATA   SET   HAS BEEN FORMATTED, DDNAME = EQQTWSOU
                EQQW038I   A   DISK   DATA   SET   HAS BEEN FORMATTED, DDNAME = EQQTWSIN

Note: In the Tivoli Workload Scheduler for z/OS system messages, there will
 also be two IEC031I messages related to the formatting messages in
 Example 4-7. These messages can be ignored because they are related to
 the formatting of the EQQTWSIN and EQQTWSOU data sets.

 The IEC031I messages look like:
 IEC031I D37-04,IFG0554P,TWSC,TWSC,EQQTWSOU,........................
 IEC031I D37-04,IFG0554P,TWSC,TWSC,EQQTWSIN,.............................


The messages in Example 4-8 and Example 4-9 show that the controller is
started with the end-to-end feature active and that it is ready to run jobs in the
end-to-end environment.

When the Tivoli Workload Scheduler for z/OS controller is stopped, the
end-to-end related messages shown in Example 4-8 will be issued.
Example 4-8 Controller messages for end-to-end when controller is stopped
EQQG003I   SUBTASK E2E   RECEIVER HAS ENDED
EQQG003I   SUBTASK E2E   SENDER HAS ENDED
EQQZ034I   OPC SUBTASK   E2E SENDER       HAS ENDED.
EQQZ034I   OPC SUBTASK   E2E RECEIVER     HAS ENDED.
EQQZ034I   OPC SUBTASK   E2E ENABLER      HAS ENDED.


Verify the Tivoli Workload Scheduler for z/OS server
After the customization steps have been completed for the end-to-end server
started task, simply start the end-to-end server started task. Check the server
message log (EQQMLOG) for any unexpected error or warning messages. All Tivoli
Workload Scheduler for z/OS messages are prefixed with EQQ. See the IBM Tivoli
Workload Scheduler for z/OS Messages and Codes, Version 8.2 (Maintenance
Release April 2004), SC32-1267.

When the end-to-end server is started for the first time, check that the messages
shown in Example 4-9 appear in the Tivoli Workload Scheduler for z/OS
end-to-end server EQQMLOG.
Example 4-9 End-to-end server messages first time the end-to-end server is started
EQQPH00I SERVER TASK HAS STARTED
EQQPH33I THE END-TO-END PROCESSES HAVE BEEN STARTED
EQQZ024I Initializing wait parameters
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/translator" has been started,
          pid is 67371783
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/netman" has been started,
          pid is 67371919
EQQPT56W The /DD:EQQTWSIN queue has not been formatted yet
EQQPT22I Input Translator thread stopped until new Symphony will be available


                 The messages shown in Example 4-9 on page 205 are normal when the Tivoli
                 Workload Scheduler for z/OS end-to-end server is started for the first time and
                 no Symphony file has been created yet.

                 Furthermore, even when the EQQTWSIN and EQQTWSOU data sets are both empty
                 and no Symphony file has been created, the end-to-end server message EQQPT56W
                 is normally issued only for the EQQTWSIN data set.

                 If the Tivoli Workload Scheduler for z/OS controller and end-to-end server are
                 started with an empty EQQTWSOU data set (for example, after it has been
                 reallocated with a new record length), message EQQPT56W is issued for the
                 EQQTWSOU data set:
                    EQQPT56W The /DD:EQQTWSOU queue has not been formatted yet

                 If a Symphony file has been created, the end-to-end server message log contains
                 the messages shown in the following example.
                Example 4-10 End-to-end server messages when server is started with Symphony file
                EQQPH33I THE END-TO-END PROCESSES HAVE BEEN STARTED
                EQQZ024I Initializing wait parameters
                EQQPT01I Program "/usr/lpp/TWS/TWS820/bin/translator" has been started,
                          pid is 33817341
                EQQPT01I Program "/usr/lpp/TWS/TWS820/bin/netman" has been started,
                          pid is 262958
                EQQPT20I Input Translator waiting for Batchman and Mailman are started
                EQQPT21I Input Translator finished waiting for Batchman and Mailman


                 The messages shown in Example 4-10 are the normal start-up messages for a Tivoli
                 Workload Scheduler for z/OS end-to-end server with a Symphony file.

                 When the end-to-end server is stopped, the messages shown in Example 4-11
                 should be issued in the EQQMLOG.
                Example 4-11 End-to-end server messages when server is stopped
                EQQZ000I   A STOP OPC COMMAND HAS BEEN RECEIVED
                EQQPT04I   Starter has detected a stop command
                EQQPT40I   Input Translator thread is shutting down
                EQQPT12I   The Netman process (pid=262958) ended successfully
                EQQPT40I   Output Translator thread is shutting down
                EQQPT53I   Output Translator thread has terminated
                EQQPT53I   Input Translator thread has terminated
                EQQPT40I   Input Writer thread is shutting down
                EQQPT53I   Input Writer thread has terminated
                EQQPT12I   The Translator process (pid=33817341) ended successfully
EQQPT10I   All Starter's sons ended
EQQPH34I   THE END-TO-END PROCESSES HAVE ENDED
EQQPH01I   SERVER TASK ENDED


           After successful completion of the verification, move on to the next step in the
           end-to-end installation.



4.3 Installing Tivoli Workload Scheduler in an
end-to-end environment
           In this section, we describe how to install Tivoli Workload Scheduler in an
           end-to-end environment.

             Important: Maintenance releases of Tivoli Workload Scheduler are made
             available about every three months. We recommend that, before installing,
             you check for the latest available update at:
             ftp://ftp.software.ibm.com

             The latest release (as we write this book) for IBM Tivoli Workload Scheduler is
             8.2-TWS-FP04 and is available at:
             ftp://ftp.software.ibm.com/software/tivoli_support/patches/patches_8.2.0/8.2.0-TWS-FP04/


           Installing a Tivoli Workload Scheduler agent in an end-to-end environment is not
           very different from installing Tivoli Workload Scheduler when Tivoli Workload
           Scheduler for z/OS is not involved. Follow the installation instructions in the IBM
           Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273. The
           main differences to keep in mind are that in an end-to-end environment, the
           master domain manager is always the Tivoli Workload Scheduler for z/OS engine
           (known by the Tivoli Workload Scheduler workstation name OPCMASTER), and
           the local workstation name of the fault-tolerant workstation is limited to four
           characters.


4.3.1 Installing multiple instances of Tivoli Workload Scheduler on
one machine
           As mentioned in Chapter 2, “End-to-end scheduling architecture” on page 25,
           there are often good reasons to install multiple instances of the Tivoli Workload
            Scheduler engine on the same machine. If you plan to do this, there are some
            important considerations to keep in mind. Careful planning before installation
            can save you a considerable amount of work later.

                The following items must be unique for each instance of the Tivoli Workload
                Scheduler engine that is installed on a computer:
                    The Tivoli Workload Scheduler user name and ID associated with the
                    instance
                    The home directory of the Tivoli Workload Scheduler user
                    The component group (only on tier-2 platforms: LinuxPPC, IRIX, Tru64 UNIX,
                    Dynix, HP-UX 11i Itanium)
                    The netman port number (set by the nm port option in the localopts file)

                 First, the user name and ID must be unique. There are many different ways to
                 name these users. Choose user names that make sense to you. It may simplify things
                to create a group called IBM Tivoli Workload Scheduler and make all Tivoli
                Workload Scheduler users members of this group. This would enable you to add
                group access to files to grant access to all Tivoli Workload Scheduler users.
                When installing Tivoli Workload Scheduler on UNIX, the Tivoli Workload
                Scheduler user is specified by the -uname option of the UNIX customize script. It
                is important to specify the Tivoli Workload Scheduler user because otherwise the
                customize script will choose the default user name maestro. Obviously, if you plan
                to install multiple Tivoli Workload Scheduler engines on the same computer, they
                cannot both be installed as the user maestro.
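
                 A sketch of the installation commands for the two engines (user names from
                 Example 4-12 below; any additional options that the customize script requires
                 for your platform are omitted):

                     # Engine A
                     ./customize -uname tws-a
                     # Engine B
                     ./customize -uname tws-b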

                Second, the home directory must be unique. In order to keep two different Tivoli
                Workload Scheduler engines completely separate, each one must have its own
                home directory.

                  Note: Previous versions of Tivoli Workload Scheduler installed files into a
                  directory called unison in the parent directory of the Tivoli Workload Scheduler
                  home directory. Tivoli Workload Scheduler 8.2 simplifies things by placing the
                  unison directory inside the Tivoli Workload Scheduler home directory.

                  The unison directory is a relic of the days when Unison Software’s Maestro
                  program (the direct ancestor of IBM Tivoli Workload Scheduler) was one of
                  several programs that all shared some common data. The unison directory
                  was where the common data shared between Unison’s various products was
                  stored. Important information is still stored in this directory, including the
                  workstation database (cpudata) and the NT user database (userdata). The
                  Tivoli Workload Scheduler Security file is no longer stored in the unison
                  directory; it is now stored in the Tivoli Workload Scheduler home directory.

Figure 4-18 should give you an idea of how two Tivoli Workload Scheduler
engines might be installed on the same computer. You can see that each engine
has its own separate Tivoli Workload Scheduler directory.



 TWS Engine A is installed in /tivoli/tws/tws-a and TWS Engine B in /tivoli/tws/tws-b.
 Each home directory contains its own network, Security, bin, and mozart directories,
 holding (among other files) cpudata, userdata, mastsked, and jobs.

Figure 4-18 Two separate Tivoli Workload Scheduler engines on one computer

Example 4-12 shows the /etc/passwd entries that correspond to the two Tivoli
Workload Scheduler users.
Example 4-12 Excerpt from /etc/passwd: two different Tivoli Workload Scheduler users
tws-a:!:31111:9207:TWS Engine A User:/tivoli/tws/tws-a:/usr/bin/ksh
tws-b:!:31112:9207:TWS Engine B User:/tivoli/tws/tws-b:/usr/bin/ksh


Note that each Tivoli Workload Scheduler user has a unique name, ID, and home
directory.

On tier-2 platforms only (Linux/PPC, IRIX, Tru64 UNIX, Dynix, HP-UX
11i/Itanium), Tivoli Workload Scheduler still uses the /usr/unison/components
file to keep track of each installed Tivoli Workload Scheduler engine. Each Tivoli
Workload Scheduler engine on a tier-2 platform computer must have a unique
component group name. The component group is arbitrary; it is just a name that
is used by Tivoli Workload Scheduler programs to keep each engine separate.
The name of the component group is entirely up to you. It can be specified using
the -group option of the UNIX customize script during installation on a tier-2
platform machine. It is important to specify a different component group name for
each instance of the Tivoli Workload Scheduler engine installed on a computer.

Component groups are stored in the file /usr/unison/components. This file
                contains two lines for each component group.

                Example 4-13 shows the components file corresponding to the two Tivoli
                Workload Scheduler engines.
                Example 4-13 Sample /usr/unison/components file for tier-2 platforms
                netman    1.8.1 /tivoli/TWS/TWS-A/tws   TWS-Engine-A
                maestro   8.1     /tivoli/TWS/TWS-A/tws   TWS-Engine-A
                netman    1.8.1.1 /tivoli/TWS/TWS-B/tws   TWS-Engine-B
                maestro   8.1     /tivoli/TWS/TWS-B/tws   TWS-Engine-B


                The component groups are called TWS-Engine-A and TWS-Engine-B. For each
                component group, the version and path for netman and maestro (the Tivoli
                Workload Scheduler engine) are listed. In this context, maestro refers simply to
                the Tivoli Workload Scheduler home directory.

                  Important: The /usr/unison/components file is used only on tier-2 platforms.

                On tier-1 platforms (such as AIX, Linux/x86, Solaris, HP-UX, and Windows XP),
                there is no longer a need to be concerned with component groups because the
                new ISMP installer automatically keeps track of each installed Tivoli Workload
                Scheduler engine. It does so by writing data about each engine to a file called
                /etc/TWS/TWS Registry.dat.

                  Important: Do not edit or remove the /etc/TWS/TWS Registry.dat file because
                  this could cause problems with uninstalling Tivoli Workload Scheduler or with
                  installing fix packs. Do not remove this file unless you intend to remove all
                  installed Tivoli Workload Scheduler 8.2 engines from the computer.

                Finally, because netman listens for incoming TCP link requests from other Tivoli
                Workload Scheduler agents, it is important that the netman program for each
                Tivoli Workload Scheduler engine listen to a unique port. This port is specified by
                the nm port option in the Tivoli Workload Scheduler localopts file. If you change
                this option, you must shut down netman and start it again to make the change
                take effect.
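
                 A sketch of the relevant localopts line for each engine, using the values from
                 Table 4-4 on page 211:

                     # localopts for the tws-a engine
                     nm port =31111
                     # localopts for the tws-b engine
                     nm port =31112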

In our test environment, for each Tivoli Workload Scheduler engine we chose a
user ID and a netman port number that are the same. This makes them easier to
remember and simplifies troubleshooting. Table 4-4 on page 211 shows the
                names and numbers we used in our testing.

Table 4-4 If possible, choose user IDs and port numbers that are the same
            User name              User ID                 Netman port

            tws-a                  31111                   31111

            tws-b                  31112                   31112


4.3.2 Verify the Tivoli Workload Scheduler installation
           Start Tivoli Workload Scheduler and verify that it starts without any error
           messages.

            Note that if there are no active workstations defined in Tivoli Workload Scheduler
            for z/OS for the Tivoli Workload Scheduler agent, only the netman process is
            started. You can still verify that the netman process is running and that it
            listens on the IP port number that you have decided to use in your end-to-end
            environment.



4.4 Define, activate, verify fault-tolerant workstations
            To be able to define jobs in Tivoli Workload Scheduler for z/OS to be scheduled
            on FTWs, the workstations must be defined in the Tivoli Workload Scheduler for
            z/OS controller.

            The workstations that are defined via the CPUREC statement should also be
           defined in the Tivoli Workload Scheduler for z/OS workstation database before
           they can be activated in the Tivoli Workload Scheduler for z/OS plan. The
           workstations are defined the same way as computer workstations in Tivoli
           Workload Scheduler for z/OS, except they need a special flag: fault tolerant. This
           flag is used to indicate in Tivoli Workload Scheduler for z/OS that these
           workstations should be treated as FTWs.

           When the FTWs have been defined in the Tivoli Workload Scheduler for z/OS
           workstation database, they can be activated in the Tivoli Workload Scheduler for
           z/OS plan by either running a plan replan or plan extend batch job.

           The process is as follows:
           1. Create a CPUREC definition for the workstation as described in “CPUREC
              statement” on page 187.
           2. Define the FTW in the Tivoli Workload Scheduler for z/OS workstation
              database. Remember to set it to fault tolerant.
           3. Run Tivoli Workload Scheduler for z/OS plan replan or plan extend to activate
              the workstation definition in Tivoli Workload Scheduler for z/OS.
            4. Verify that the FTW becomes active and linked.
5. Define jobs and job streams on the newly created and activated FTW as
                   described in 4.5, “Creating fault-tolerant workstation job definitions and job
                   streams” on page 217.

              Important: Note that the order of the operations in this process is
              important.


4.4.1 Define fault-tolerant workstation in Tivoli Workload Scheduler
controller workstation database
                 A fault-tolerant workstation can be defined either from the Tivoli Workload
                 Scheduler for z/OS legacy ISPF dialogs (use option 1.1 from the main menu) or
                 in the JSC.

                In the following steps, we show how to define an FTW from the JSC (see
                Figure 4-19 on page 213):
                1. Open the Actions Lists, select New Workstation, then select the instance
                   for the Tivoli Workload Scheduler for z/OS controller where the workstation
                   should be defined (TWSC-zOS in our example).
                2. The Properties - Workstation in Database window opens.
                3. Select the Fault Tolerant check box and fill in the Name field (the
                   four-character name of the FTW) and, optionally, the Description field. See
                   Figure 4-19 on page 213.

                      Note: It is a good standard to use the first part of the description field to list
                      the DNS name or host name for the FTW. This makes it easier to
                      remember which server or machine the four-character workstation name in
                      Tivoli Workload Scheduler for z/OS relates to. You can add up to 32
                      alphanumeric characters in the description field.

                4. Save the new workstation definition by clicking OK.

Note: When we used the JSC to create FTWs as described, we
                 sometimes received this error:
                    GJS0027E Cannot save the workstation xxxx.
                    Reason: EQQW787E FOR FT WORKSTATIONS RESOURCES CANNOT BE USED AT
                    PLANNING

                 If you receive this error when creating the FTW from the JSC, then select
                 the Resources tab (see Figure 4-19 on page 213) and un-check the Used
                 for planning check box for Resource 1 and Resource 2. This must be
                 done before selecting the Fault Tolerant check box on the General tab.




            Figure 4-19 Defining a fault-tolerant workstation from the JSC


4.4.2 Activate the fault-tolerant workstation definition
            Fault-tolerant workstation definitions can be activated in the Tivoli Workload
            Scheduler for z/OS plan either by running the replan or the extend plan programs
            in the Tivoli Workload Scheduler for z/OS controller.

When running the replan or extend program, Tivoli Workload Scheduler for z/OS
                creates (or recreates) the Symphony file and distributes it to the domain
                managers at the first level. These domain managers, in turn, distribute the
                Symphony file to their subordinate fault-tolerant agents and domain managers,
                and so on. If the Symphony file is successfully created and distributed, all defined
                FTWs should be linked and active.

                We run the replan program and verify that the Symphony file is created in the
                end-to-end server. We also verify that the FTWs become available and have
                linked status in the Tivoli Workload Scheduler for z/OS plan.


4.4.3 Verify that the fault-tolerant workstations are active and linked
                First, verify that there are no warning or error messages in the replan batch job
                log (EQQMLOG). The message log should show that all topology statements
                (DOMREC, CPUREC, and USRREC) have been accepted without errors or
                warnings.

                Verify messages in plan batch job
                For a successful creation of the Symphony file, the message log should show
                messages similar to those in Example 4-14.
                Example 4-14 Plan batch job EQQMLOG messages when Symphony file is created
                EQQZ014I   MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPDOMAIN IS: 0000
                EQQZ013I   NOW PROCESSING PARAMETER LIBRARY MEMBER TPUSER
                EQQZ014I   MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPUSER IS: 0000
                EQQQ502I   SPECIAL RESOURCE DATASPACE HAS BEEN CREATED.
                EQQQ502I   00000020 PAGES ARE USED FOR 00000100 SPECIAL RESOURCE RECORDS.
                EQQ3011I   WORKSTATION F100 SET AS DOMAIN MANAGER FOR DOMAIN DM100
                EQQ3011I   WORKSTATION F200 SET AS DOMAIN MANAGER FOR DOMAIN DM200
                EQQ3105I   A NEW CURRENT PLAN (NCP) HAS BEEN CREATED
                EQQ3106I   WAITING FOR SCP
                EQQ3107I   SCP IS READY: START JOBS ADDITION TO SYMPHONY FILE
                EQQ4015I   RECOVERY JOB OF F100DJ01 HAS NO JOBWS KEYWORD SPECIFIED,
                EQQ4015I   THE WORKSTATION F100 OF JOB F100DJ01 IS USED
                EQQ3108I   JOBS ADDITION TO SYMPHONY FILE COMPLETED
                EQQ3101I   0000019 JOBS ADDED TO THE SYMPHONY FILE FROM THE CURRENT PLAN
                EQQ3087I   SYMNEW FILE HAS BEEN CREATED


                Verify messages in the end-to-end server message log
                In the Tivoli Workload Scheduler for z/OS end-to-end server message log, we
                see the messages shown in Example 4-15. These messages show that the
                Symphony file has been created by the plan replan batch jobs and that it was
                possible for the end-to-end server to switch to the new Symphony file.



Example 4-15 End-to-end server messages when Symphony file is created
EQQPT30I   Starting switching Symphony
EQQPT12I   The Mailman process (pid=Unknown) ended successfully
EQQPT12I   The Batchman process (pid=Unknown) ended successfully
EQQPT22I   Input Translator thread stopped until new Symphony will be available
EQQPT31I   Symphony successfully switched
EQQPT20I   Input Translator waiting for Batchman and Mailman are started
EQQPT21I   Input Translator finished waiting for Batchman and Mailman
EQQPT23I   Input Translator thread is running


Verify messages in the controller message log
The Tivoli Workload Scheduler for z/OS controller shows the messages in
Example 4-16, which indicate that the Symphony file was created successfully
and that the fault-tolerant workstations are active and linked.
Example 4-16 Controller messages when Symphony file is created
EQQN111I   SYMNEW FILE HAS BEEN CREATED
EQQW090I   THE NEW SYMPHONY FILE HAS BEEN SUCCESSFULLY SWITCHED
EQQWL10W   WORK STATION F100, HAS BEEN SET TO LINKED STATUS
EQQWL10W   WORK STATION F100, HAS BEEN SET TO ACTIVE   STATUS
EQQWL10W   WORK STATION F101, HAS BEEN SET TO LINKED STATUS
EQQWL10W   WORK STATION F102, HAS BEEN SET TO LINKED STATUS
EQQWL10W   WORK STATION F101, HAS BEEN SET TO ACTIVE   STATUS
EQQWL10W   WORK STATION F102, HAS BEEN SET TO ACTIVE   STATUS


Verify that fault-tolerant workstations are active and linked
After the replan job has completed and its output messages have been checked,
the FTWs are verified using the JSC instance that points to the Tivoli Workload
Scheduler for z/OS controller (Figure 4-20).

The Fault Tolerant column indicates that it is an FTW. The Linked column
indicates whether the workstation is linked. The Status column indicates whether
the mailman process is up and running on the FTW.




Figure 4-20 Status of FTWs in the Tivoli Workload Scheduler for z/OS plan




The F200 workstation is Not Available because we have not yet installed a Tivoli
                Workload Scheduler fault-tolerant workstation on this machine. We have
                prepared for a future installation of the F200 workstation by creating the related
                CPUREC definition for F200 and defining the FTW (F200) in the Tivoli Workload
                Scheduler for z/OS controller workstation database.

                  Tip: If the workstation does not link as it should, the cause could be that the
                  writer process has not initiated correctly or the run number for the Symphony
                  file on the FTW is not the same as the run number on the master. Mark the
                  unlinked workstations and right-click to open a pop-up menu where you can
                  click Link to try to link the workstation.

                  The run number for the Symphony file in the end-to-end server can be seen
                  from legacy ISPF panels in option 6.6 from the main menu.

                Figure 4-21 shows the status of the same FTWs in the JSC, this time looking at
                the Symphony file on domain manager F100.

                Note that much more information is available for each FTW. For example, in
                Figure 4-21 we can see that jobman and writer are running and that up to 20 jobs
                can run in parallel on each FTW (the Limit column). Also note the information in
                the Run, CPU type, and Domain columns.

                The information shown in Figure 4-21 is read from the Symphony file and
                generated by the plan programs based on the specifications in CPUREC and
                DOMREC definitions. This is one of the reasons why we suggest activating
                support for JSC when running end-to-end scheduling with Tivoli Workload
                Scheduler for z/OS.

                Note that the status of the OPCMASTER workstation is correct; also remember
                that the OPCMASTER workstation and the MASTERDM domain are predefined
                in Tivoli Workload Scheduler for z/OS and cannot be changed.

                Jobman is not running on OPCMASTER (in USS in the end-to-end server)
                because the end-to-end server is not supposed to run jobs in USS, so the
                indication that jobman is not running on the OPCMASTER workstation is expected.




                Figure 4-21 Status of FTWs in the Symphony file on domain manager F100



4.5 Creating fault-tolerant workstation job definitions
and job streams
           When the FTWs are active and linked in Tivoli Workload Scheduler for z/OS, you
           can run jobs on these workstations.

           To submit work to the FTWs in Tivoli Workload Scheduler for z/OS, you should:
           1. Define the script (the JCL or the task) that should be executed on the FTW
              (that is, on the server).
              When defining scripts in Tivoli Workload Scheduler for z/OS, remember that
              the script can be placed centrally in the Tivoli Workload Scheduler for z/OS
              job library or non-centralized on the FTW (on the Tivoli Workload Scheduler
              server).
              Definitions of scripts are found in:
              – 4.5.1, “Centralized and non-centralized scripts” on page 217
              – 4.5.2, “Definition of centralized scripts” on page 219,
              – 4.5.3, “Definition of non-centralized scripts” on page 221
              – 4.5.4, “Combination of centralized script and VARSUB, JOBREC
                parameters” on page 232
           2. Create a job stream (application) in Tivoli Workload Scheduler for z/OS and
              add the job (operation) defined in step 1.
              It is possible to add the job (operation) to an existing job stream and create
              dependencies between jobs on FTWs and jobs on the mainframe.
              Definition of FTW jobs and job streams in Tivoli Workload Scheduler for z/OS
              is found in 4.5.5, “Definition of FTW jobs and job streams in the controller” on
              page 234.


4.5.1 Centralized and non-centralized scripts
           As described in “Tivoli Workload Scheduler for z/OS end-to-end database
           objects” on page 69, a job can use two kinds of scripts: centralized or
           non-centralized.

           A centralized script is a script that resides in the controller job library (the
           EQQJBLIB DD statement, also called the JOBLIB) and that is downloaded to the
           FTW every time the job is submitted. Figure 4-22 on page 218 illustrates the
           relationship between the centralized script job definition and the member name
           in the job library (JOBLIB).




Figure 4-22 Centralized script defined in controller job library (JOBLIB)

                  A non-centralized script is a script that is defined in the SCRPTLIB and that
                  resides on the FTW. Figure 4-23 on page 219 shows the relationship between
                  the job definition and the member name in the script library (EQQSCLIB).




Figure 4-23 Non-centralized script defined in controller script library (EQQSCLIB)


4.5.2 Definition of centralized scripts
                  Define the centralized script job (operation) in a Tivoli Workload Scheduler for
                  z/OS job stream (application) with the centralized script option set to Y (Yes). See
                  Figure 4-24 on page 220.

                   Note: The default is N (No) for all operations in Tivoli Workload Scheduler for
                   z/OS.




Figure 4-24 Centralized script option set in ISPF panel or JSC window

                 A centralized script is a script that resides in the Tivoli Workload Scheduler for
                 z/OS JOBLIB and that is downloaded to the fault-tolerant agent every time the
                 job is submitted.

                 The centralized script is defined the same way as a normal job JCL in Tivoli
                 Workload Scheduler for z/OS.
                 Example 4-17 Centralized script for job AIXHOUSP defined in controller JOBLIB
                 EDIT       TWS.V8R20.JOBLIB(AIXHOUSP) - 01.02              Columns 00001 00072
                  Command ===>                                                  Scroll ===> CSR
                  ****** ***************************** Top of Data ******************************
                  000001 //*%OPC SCAN
                  000002 //* OPC Comment: This job calls TWS rmstdlist script.
                  000003 //* OPC ======== - The rmstdlist script is called with -p flag and
                  000004 //* OPC            with parameter 10.
                  000005 //* OPC          - This means that the rmstdlist script will print
                  000006 //* OPC            files in the stdlist directory older than 10 days.
                  000007 //* OPC          - If rmstdlist ends with RC in the interval from 1
                  000008 //* OPC            to 128, OPC will add recovery application
                  000009 //* OPC            F100CENTRECAPPL.



000010   //* OPC
            000011   //*%OPC RECOVER JOBCODE=(1-128),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO)
            000012   //* OPC
            000013   echo 'OPC occurrence plan date is: &ODMY1.'
            000014   rmstdlist -p 10
            ******   **************************** Bottom of Data ****************************


            In the centralized script in Example 4-17 on page 220, we are running the
            rmstdlist program that is delivered with Tivoli Workload Scheduler. In the
            centralized script, we use Tivoli Workload Scheduler for z/OS Automatic
            Recovery as well as JCL variables.

            Rules when creating centralized scripts
            Follow these rules when creating the centralized scripts in the Tivoli Workload
            Scheduler for z/OS JOBLIB:
                Each line starts in column 1 and ends in column 80.
                A backslash (\) in column 80 can be used to continue script lines that are
                longer than 80 characters.
                Blanks at the end of a line are automatically removed.
                Lines that start with //* OPC, //*%OPC, or //*>OPC are used for comments,
                variable substitution directives, and automatic job recovery. These lines are
                automatically removed before the script is downloaded to the FTA.
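
                As an illustration of these rules, the following is a minimal sketch of a
                hypothetical centralized script member (our own example, not part of the ITSO
                scenario; the housekeeping.sh script path is an assumption). The //*%OPC SCAN
                directive and the //* OPC comment line are processed by the controller and
                removed before the script is downloaded, so only the two shell lines reach the
                FTA. The &ODMY1. variable is the same supplied JCL variable used in
                Example 4-17.

                //*%OPC SCAN
                //* OPC This comment line is removed before the script is sent to the FTA
                echo 'OPC occurrence plan date is: &ODMY1.'
                /tivoli/tws/scripts/housekeeping.sh daily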


4.5.3 Definition of non-centralized scripts
            Non-centralized scripts are defined in a special partitioned data set, EQQSCLIB,
            that is allocated in the Tivoli Workload Scheduler for z/OS controller started task
            procedure and used to store the job or task definitions for FTA jobs. The script
            (the JCL) resides on the fault-tolerant agent.

             Note: This is the default behavior in Tivoli Workload Scheduler for z/OS for
             fault-tolerant agent jobs.

            You must use the JOBREC statement in every SCRPTLIB member to specify the
            script or command to run. In the SCRPTLIB members, you can also specify the
            following statements:
               VARSUB to use the Tivoli Workload Scheduler for z/OS automatic substitution
               of variables when the Symphony file is created or when an operation on an
               FTW is added to the current plan dynamically.
               RECOVERY to use the Tivoli Workload Scheduler recovery.




Example 4-18 shows the syntax for the VARSUB, JOBREC, and RECOVERY
                statements.
                Example 4-18 Syntax for VARSUB, JOBREC, and RECOVERY statements
                VARSUB
                  TABLES(GLOBAL|tab1,tab2,..|APPL)
                  PREFIX('char')
                  BACKPREF('char')
                  VARFAIL(YES|NO)
                  TRUNCATE(YES|NO)
                JOBREC
                  JOBSCR|JOBCMD('task')
                  JOBUSR('username')
                  INTRACTV(YES|NO)
                  RCCONDSUC('success condition')
                RECOVERY
                  OPTION(STOP|CONTINUE|RERUN)
                  MESSAGE('message')
                  JOBCMD|JOBSCR('task')
                  JOBUSR('username')
                  JOBWS('wsname')
                  INTRACTV(YES|NO)
                  RCCONDSUC('success condition')


                If you define a job with a SCRPTLIB member in the Tivoli Workload Scheduler for
                z/OS database that contains errors, the daily planning batch job sets the status
                of that job to failed in the Symphony file. This change of status is not shown in
                the Tivoli Workload Scheduler for z/OS interface. You can find the messages that
                explain the error in the log of the daily planning batch job.

                If you dynamically add a job to the plan in Tivoli Workload Scheduler for z/OS
                whose associated SCRPTLIB member contains errors, the job is not added. You
                can find the messages that explain this failure in the controller EQQMLOG.

                Rules when creating JOBREC, VARSUB, or RECOVERY statements
                Each statement consists of a statement name, keywords, and keyword values,
                and follows TSO command syntax rules. When you specify SCRPTLIB
                statements, follow these rules:
                    Statement data must be in columns 1 through 72. Information in columns 73
                    through 80 is ignored.
                    A blank serves as the delimiter between two keywords; if you supply more
                    than one delimiter, the extra delimiters are ignored.
                    Continuation characters and blanks are not used to define a statement that
                    continues on the next line.




Values for keywords are contained within parentheses. If a keyword can have
   multiple values, the list of values must be separated by valid delimiters.
   Delimiters are not allowed between a keyword and the left parenthesis of the
   specified value.
   Type /* to start a comment and */ to end a comment. A comment can span
   record images in the parameter member and can appear anywhere except in
   the middle of a keyword or a specified value.
   A statement continues until the next statement or until the end of records in
   the member.
   If the value of a keyword includes spaces, enclose the value within single or
   double quotation marks as in Example 4-19.
Example 4-19 JOBCMD and JOBSCR examples
JOBCMD('ls la')
JOBSCR('C:/USERLIB/PROG/XME.EXE')
JOBSCR("C:/USERLIB/PROG/XME.EXE")
JOBSCR("C:/USERLIB/PROG/XME.EXE 'THIS IS THE PARAMETER LIST' ")
JOBSCR('C:/USERLIB/PROG/XME.EXE "THIS IS THE PARAMETER LIST" ')


Description of the VARSUB statement
The VARSUB statement defines the variable substitution options. This statement
must always be the first one in the members of the SCRPTLIB. For more
information about the variable definition, see IBM Tivoli Workload Scheduler for
z/OS Managing the Workload, Version 8.2 (Maintenance Release April 2004),
SC32-1263.

 Note: VARSUB can be used in combination with a job that is defined with a
 centralized script.

Figure 4-25 shows the format of the VARSUB statement.




Figure 4-25 Format of the VARSUB statement




VARSUB is defined in the members of the EQQSCLIB library, as specified by the
                EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the plan
                extend, replan, and Symphony renew batch job JCL.

                Description of the VARSUB parameters
                The following describes the VARSUB parameters:
                    TABLES(GLOBAL|APPL|table1,table2,...)
                    Identifies the variable tables that must be searched and the search order.
                    APPL indicates the application variable table (see the VARIABLE TABLE field
                    in the MCP panel, at Occurrence level). GLOBAL indicates the table defined
                    in the GTABLE keyword of the OPCOPTS controller and BATCHOPT batch
                    options.
                    PREFIX(char|&)
                    A non-alphanumeric character that precedes a variable. It serves the same
                    purpose as the ampersand (&) character that is used in variable substitution
                    in z/OS JCL.
                    BACKPREF(char|%)
                    A non-alphanumeric character that delimits a variable to form simple and
                    compound variables. It serves the same purpose as the percent (%) character
                    that is used in variable substitution in z/OS JCL.
                    VARFAIL(NO|YES)
                    Specifies whether Tivoli Workload Scheduler for z/OS is to issue an error
                    message when a variable substitution error occurs. If you specify NO, the
                    variable string is left unchanged without any translation.
                    TRUNCATE(YES|NO)
                    Specifies whether variables are to be truncated if they are longer than the
                    allowed length. If you specify NO and the keywords are longer than the
                    allowed length, an error message is issued. The allowed length is the length
                    of the keyword for which you use the variable. For example, if you specify a
                    variable of five characters for the JOBWS keyword, the variable is truncated to
                    the first four characters.
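
                    As a minimal sketch of how these parameters work together (our own
                    illustration, not part of the ITSO scenario): the following hypothetical
                    SCRPTLIB member reuses the E2EVAR variable table and the TWSHOME and
                    TWSUSER variables that appear later in Example 4-20 on page 231; the
                    backup.sh script path is an assumption. With PREFIX('&') and BACKPREF('%'),
                    both the &-style and the %-style variables in the JOBREC keywords are
                    substituted from the table when the job is added to the Symphony file.

                    VARSUB
                      TABLES(E2EVAR)       /* search the E2EVAR variable table            */
                      PREFIX('&')          /* & introduces a variable, as in z/OS JCL     */
                      BACKPREF('%')        /* % delimits simple and compound variables    */
                      VARFAIL(YES)         /* fail if a variable cannot be substituted    */
                      TRUNCATE(NO)         /* issue an error instead of truncating values */
                    JOBREC
                      JOBSCR('&TWSHOME/scripts/backup.sh')  /* &TWSHOME taken from E2EVAR */
                      JOBUSR('%TWSUSER')                    /* %TWSUSER taken from E2EVAR */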

                Description of the JOBREC statement
                The JOBREC statement defines the fault-tolerant workstation job properties. You
                must specify JOBREC for each member of the SCRPTLIB. For each job this
                statement specifies the script or the command to run and the user that must run
                the script or command.




 Note: JOBREC can be used in combination with a job that is defined with a
 centralized script.

Figure 4-26 shows the format of the JOBREC statement.




Figure 4-26 Format of the JOBREC statement

JOBREC is defined in the members of the EQQSCLIB library, as specified by the
EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the plan
extend, replan, and Symphony renew batch job JCL.

Description of the JOBREC parameters
The following describes the JOBREC parameters:
   JOBSCR(script name)
   Specifies the name of the shell script or executable file to run for the job. The
   maximum length is 4095 characters. If the script includes more than one
   word, it must be enclosed within single or double quotation marks. Do not
   specify this keyword if the job uses a centralized script.
   JOBCMD(command name)
   Specifies the name of the shell command to run the job. The maximum length
   is 4095 characters. If the command includes more than one word, it must be
   enclosed within single or double quotation marks. Do not specify this keyword
   if the job uses a centralized script.
   JOBUSR(user name)
   Specifies the name of the user submitting the specified script or command.
   The maximum length is 47 characters. If you do not specify the user in the
   JOBUSR keyword, the user defined in the CPUUSER keyword of the
   CPUREC statement is used. The CPUREC statement is the one related to
   the workstation on which the specified script or command must run. If the
   user is not specified in the CPUUSER keyword, the tws user is used.
   If the script is centralized, you can also use the job-submit exit (EQQUX001)
   to specify the user name. This user name overrides the value specified in the
   JOBUSR keyword. In turn, the value that is specified in the JOBUSR keyword




overrides that specified in the CPUUSER keyword of the CPUREC statement.
                    If no user name is specified, the tws user is used.
                    If you use this keyword to specify the name of the user who submits the
                    specified script or command on a Windows fault-tolerant workstation, you
                    must associate this user name to the Windows workstation in the USRREC
                    initialization statement.
                    INTRACTV(YES|NO)
                    Specifies that a Windows job runs interactively on the Windows desktop. This
                    keyword is used only for jobs running on Windows fault-tolerant workstations.
                     RCCONDSUC("success condition")
                     An expression that determines the return code (RC) that is required to
                     consider a job as successful. If you do not specify this keyword, only a
                     return code of zero is considered successful; any other return code is
                     treated as a job abend.
                     The maximum length of the success condition is 256 characters, and the total
                     length of JOBCMD or JOBSCR plus the success condition must not exceed 4086 characters.
                    This is because the TWSRCMAP string is inserted between the success
                    condition and the script or command name. For example, the dir command
                    together with the success condition RC<4 is translated into:
                        dir TWSRCMAP: RC<4
                    The success condition expression can contain a combination of comparison
                    and Boolean expressions:
                    – Comparison expression specifies the job return codes. The syntax is:
                        (RC operator operand), where:
                        •   RC is the RC keyword (type RC).
                        •   operator is the comparison operator. It can have the values shown in
                            Table 4-5.
                Table 4-5 Comparison operators

                  Example      Operator     Description
                  RC < a       <            Less than
                  RC <= a      <=           Less than or equal to
                  RC > a       >            Greater than
                  RC >= a      >=           Greater than or equal to
                  RC = a       =            Equal to
                  RC <> a      <>           Not equal to

      •    operand is an integer between -2147483647 and 2147483647.
      For example, you can define a successful job as a job that ends with a
      return code less than or equal to 3 as follows:
                            RCCONDSUC("(RC <= 3)")
   – Boolean expression specifies a logical combination of comparison
     expressions. The syntax is:
      comparison_expression operator comparison_expression, where:
      •    comparison_expression
           The expression is evaluated from left to right. You can use parentheses
           to assign a priority to the expression evaluation.
      •    operator
           Logical operator. It can have the following values: and, or, not.
      For example, you can define a successful job as a job that ends with a
      return code less than or equal to 3 or with a return code not equal to 5,
      and less than 10 as follows:
                            RCCONDSUC("(RC<=3) OR ((RC<>5) AND (RC<10))")
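
To make the TWSRCMAP mechanism concrete, here is a minimal sketch of a
hypothetical SCRPTLIB member (our own example; the choice of the dir command
and the user name twsuser are assumptions). With this definition, the task string
passed to the agent becomes dir TWSRCMAP: RC<4, so the job is marked
successful for return codes 0 through 3 and treated as abended otherwise.

    JOBREC
      JOBCMD('dir')         /* Windows command to run on the FTW (assumption)   */
      JOBUSR(twsuser)       /* hypothetical user; on a Windows FTW the user     */
                            /* must also be defined in a USRREC statement       */
      RCCONDSUC('RC<4')     /* return codes 0 through 3 are treated as success  */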


Description of the RECOVERY statement
The RECOVERY statement defines the Tivoli Workload Scheduler recovery for a
job whose status is in error, but whose error code is not FAIL. To run the
recovery, you can specify one or both of the following recovery actions:
    A recovery job (JOBCMD or JOBSCR keywords)
    A recovery prompt (MESSAGE keyword)

The recovery actions must be followed by one of the recovery options (the
OPTION keyword): stop, continue, or rerun. The default is stop with no recovery
job and no recovery prompt. For more information about recovery in a distributed
network, see Tivoli Workload Scheduler Reference Guide Version 8.2
(Maintenance Release April 2004), SC32-1274.

The RECOVERY statement is ignored if it is used with a job that runs a
centralized script.

Figure 4-27 on page 228 shows the format of the RECOVERY statement.




Figure 4-27 Format of the RECOVERY statement

                RECOVERY is defined in the members of the EQQSCLIB library, as specified by
                the EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the
                plan extend, replan, and Symphony renew batch job JCL.

                Description of the RECOVERY parameters
                The following describes the RECOVERY parameters:
                    OPTION(STOP|CONTINUE|RERUN)
                    Specifies the option that Tivoli Workload Scheduler for z/OS must use when a
                    job abends. For every job, Tivoli Workload Scheduler for z/OS enables you to
                    define a recovery option. You can specify one of the following values:
                    – STOP: Do not continue with the next job. The current job remains in error.
                      You cannot specify this option if you use the MESSAGE recovery action.
                    – CONTINUE: Continue with the next job. The current job status changes to
                      complete in the z/OS interface.
                    – RERUN: Automatically rerun the job (once only). The job status changes
                      to ready, and then to the status of the rerun. Before rerunning the job for a
                      second time, an automatically generated recovery prompt is displayed.
                     MESSAGE('message')
                    Specifies the text of a recovery prompt, enclosed in single or double quotation
                    marks, to be displayed if the job abends. The text can contain up to 64
                    characters. If the text begins with a colon (:), the prompt is displayed, but no
                    reply is required to continue processing. If the text begins with an exclamation
                    mark (!), the prompt is not displayed but a reply is required to proceed. You
                    cannot use the recovery prompt if you specify the recovery STOP option
                    without using a recovery job.




JOBCMD(command name)
Specifies the name of the shell command to run if the job abends. The
maximum length is 4095 characters. If the command includes more than one
word, it must be enclosed within single or double quotation marks.
JOBSCR(script name)
Specifies the name of the shell script or executable file to be run if the job
abends. The maximum length is 4095 characters. If the script includes more
than one word, it must be enclosed within single or double quotation marks.
JOBUSR(user name)
Specifies the name of the user submitting the recovery job action. The
maximum length is 47 characters. If you do not specify this keyword, the user
defined in the JOBUSR keyword of the JOBREC statement is used.
Otherwise, the user defined in the CPUUSER keyword of the CPUREC
statement is used. The CPUREC statement is the one related to the
workstation on which the recovery job must run. If the user is not specified in
the CPUUSER keyword, the tws user is used.
If you use this keyword to specify the name of the user who runs the recovery
on a Windows fault-tolerant workstation, you must associate this user name with
the Windows workstation in the USRREC initialization statement.
JOBWS(workstation name)
Specifies the name of the workstation on which the recovery job or command
is submitted. The maximum length is 4 characters. The workstation must
belong to the same domain as the workstation on which the main job runs. If
you do not specify this keyword, the workstation name of the main job is used.
INTRACTV(YES|NO)
Specifies that the recovery job runs interactively on a Windows desktop. This
keyword is used only for jobs running on Windows fault-tolerant workstations.
RCCONDSUC("success condition")
An expression that determines the return code (RC) that is required to
consider a recovery job as successful. If you do not specify this keyword, only
a return code of zero is considered successful; any other return code is
treated as a job abend.
The maximum length of the success condition is 256 characters, and the total
length of JOBCMD or JOBSCR plus the success condition must not exceed 4086
characters. This is because the TWSRCMAP string is inserted between the
success condition and the script or command name. For example, the dir
command together with the success condition RC<4 is translated into:
   dir TWSRCMAP: RC<4




The success condition expression can contain a combination of comparison
                    and Boolean expressions:
                    – Comparison expression Specifies the job return codes. The syntax is:
                        (RC operator operand)
                        where:
                        •   RC is the RC keyword (type RC).
                        •   operator is the comparison operator. It can have the values in
                            Table 4-6.
                Table 4-6 Comparison operator values

                  Example      Operator     Description
                  RC < a       <            Less than
                  RC <= a      <=           Less than or equal to
                  RC > a       >            Greater than
                  RC >= a      >=           Greater than or equal to
                  RC = a       =            Equal to
                  RC <> a      <>           Not equal to
                        •   operand is an integer between -2147483647 and 2147483647.
                        For example, you can define a successful job as a job that ends with a
                        return code less than or equal to 3 as follows:
                             RCCONDSUC("(RC <= 3)")
                    – Boolean expression: Specifies a logical combination of comparison
                      expressions. The syntax is:
                        comparison_expression operator comparison_expression
                        where:
                         •   comparison_expression: The expression is evaluated from left to
                             right. You can use parentheses to assign a priority to the
                             expression evaluation.
                         •   operator: Logical operator. It can be and, or, or not.
                        For example, you can define a successful job as a job that ends with a
                        return code less than or equal to 3 or with a return code not equal to 5,
                        and less than 10 as follows:
                             RCCONDSUC("(RC<=3) OR ((RC<>5) AND (RC<10))")
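
                 The following is a minimal sketch that puts several RECOVERY keywords
                 together (our own illustration, not from the ITSO scenario; the cleanup
                 command, the user twsuser, and the workstation F100 are assumptions). The
                 colon prefix on the MESSAGE text means that the prompt is displayed for
                 information only and no reply is required, and OPTION(CONTINUE) lets Tivoli
                 Workload Scheduler continue with the next job once the recovery command has
                 run. Such a RECOVERY statement would follow the JOBREC statement in the
                 same SCRPTLIB member; remember that it is ignored for jobs that use a
                 centralized script.

                 RECOVERY
                   OPTION(CONTINUE)                  /* continue with the next job          */
                   MESSAGE(':Job failed, cleanup started') /* ':' prefix = no reply needed  */
                   JOBCMD('rm -f /tmp/app.lock')     /* hypothetical cleanup command        */
                   JOBUSR('twsuser')                 /* hypothetical user for recovery job  */
                   JOBWS(F100)                       /* recovery runs on workstation F100,  */
                                                     /* same domain as the main job         */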



Example VARSUB, JOBREC, and RECOVERY
For the test of VARSUB, JOBREC, and RECOVERY, we used the
non-centralized script member as shown in Example 4-20.
Example 4-20 Non-centralized AIX script with VARSUB, JOBREC, and RECOVERY
EDIT       TWS.V8R20.SCRPTLIB(F100DJ02) - 01.05             Columns 00001 00072
Command ===>                                                   Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 /* Definition for job with "non-centralized" script                    */
000002 /* ------------------------------------------------                    */
000003 /* VARSUB - to manage JCL variable substitution                        */
000004 VARSUB
000005      TABLES(E2EVAR)
000006      PREFIX('&')
000007      BACKPREF('%')
000008      VARFAIL(YES)
000009      TRUNCATE(YES)
000010 /* JOBREC - to define script, user and some other specifications       */
000011 JOBREC
000012      JOBCMD('rm &TWSHOME/demo.sh')
000013      JOBUSR ('%TWSUSER')
000014 /* RECOVERY - to define what FTA should do in case of error in job     */
000015 RECOVERY
000016      OPTION(RERUN)                       /* Rerun the job after recover*/
000017      JOBCMD('touch &TWSHOME/demo.sh')    /* Recover job                */
000018      JOBUSR('&TWSUSER')                  /* User for recover job       */
000019      MESSAGE ('Create demo.sh on FTA?') /* Prompt message              */
****** **************************** Bottom of Data ****************************


The member F100DJ02 in Example 4-20 was created in the SCRPTLIB
(EQQSCLIB) partitioned data set. In the non-centralized script F100DJ02, we
use VARSUB to specify how we want Tivoli Workload Scheduler for z/OS to scan
for and substitute JCL variables. The JOBREC parameters specify that we run
the UNIX (AIX) rm command against a file named demo.sh.

If the file does not exist (it does not exist the first time the script is run), we run
the recovery command (touch), which creates the missing file. We can then rerun
(OPTION(RERUN)) the JOBREC JOBCMD() without any errors.

Before the job is rerun, an operator has to reply Yes to the prompt message:
Create demo.sh on FTA?

Example 4-21 on page 232 shows another example. The job is marked as
complete if the return code from the script is less than 16 and not equal to 8,
or if it is equal to 20.




Example 4-21 Non-centralized script definition with RCCONDSUC parameter
                EDIT       TWS.V8R20.SCRPTLIB(F100DJ03) - 01.01            Columns 00001 00072
                Command ===>                                                  Scroll ===> CSR
                ****** ***************************** Top of Data ******************************
                000001 /* Definition for job with "distributed" script                       */
                000002 /* --------------------------------------------                       */
                000003 /* VARSUB - to manage JCL variable substitution                       */
                000004 VARSUB
                000005        TABLES(IBMGLOBAL)
                000006        PREFIX(%)
                000007        VARFAIL(YES)
                000008        TRUNCATE(NO)
                000009 /* JOBREC - to define script, user and some other specifications      */
                000010 JOBREC
                000011        JOBSCR('/tivoli/tws/scripts/rc_rc.sh 12')
                000012        JOBUSR(%DISTUID.)
                000013        RCCONDSUC('((RC<16) AND (RC<>8)) OR (RC=20)')


                  Important: Be careful with lowercase and uppercase. In Example 4-21, it is
                  important that the variable name DISTUID is typed with capital letters because
                  Tivoli Workload Scheduler for z/OS JCL variable names are always
                  uppercase. On the other hand, it is important that the value for the DISTUID
                  variable is defined in Tivoli Workload Scheduler for z/OS variable table
                  IBMGLOBAL with lowercase letters, because the user ID is defined on the
                  UNIX system with lowercase letters.

                  Remember to type with CAPS OFF when editing members in SCRPTLIB
                  (EQQSCLIB) for jobs with non-centralized script and members in Tivoli
                  Workload Scheduler for z/OS JOBLIB (EQQJBLIB) for jobs with centralized
                  script.


4.5.4 Combination of centralized script and VARSUB, JOBREC
parameters
                Sometimes it can be necessary to create a member in the EQQSCLIB (normally
                used for non-centralized script definitions) for a job that is defined in Tivoli
                Workload Scheduler for z/OS with centralized script.




This can be the case if:
   The RCCONDSUC parameter will be used for the job to accept specific return
   codes or return code ranges.

     Note: You cannot use the Tivoli Workload Scheduler for z/OS highest return
     code option for fault-tolerant workstation jobs; you have to use the
     RCCONDSUC parameter instead.

   A special user should be assigned to the job with the JOBUSR parameter.
   Tivoli Workload Scheduler for z/OS JCL variables should be used in the
   JOBUSR() or the RCCONDSUC() parameters (for example).

Remember that the RECOVERY statement cannot be specified in EQQSCLIB for
jobs with a centralized script (it will be ignored).

To make this combination, you simply:
1. Create the centralized script in Tivoli Workload Scheduler for z/OS JOBLIB.
   The member name should be the same as the job name defined for the
   operation (job) in the Tivoli Workload Scheduler for z/OS job stream
   (application).
2. Create the corresponding member in the EQQSCLIB. The member name
   should be the same as the member name for the job in the JOBLIB.

For example, suppose that we have a job with a centralized script, that return
codes less than 7 should be accepted for the job, and that the job should run
under the user dbprod.

To accomplish this, we define the centralized script in Tivoli Workload Scheduler
for z/OS the same way as shown in Example 4-17 on page 220. Next, we create
a member in the EQQSCLIB with the same name as the member name used for
the centralized script.

This member should only contain the JOBREC RCCONDSUC() and JOBUSR()
parameters (Example 4-22).
Example 4-22 EQQSCLIB (SCRIPTLIB) definition for job with centralized script
EDIT       TWS.V8R20.SCRPTLIB(F100CJ02) - 01.05            Columns 00001 00072
Command ===>                                                  Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 JOBREC
000002        RCCONDSUC('RC<7')
000003        JOBUSR(dbprod)




****** **************************** Bottom of Data ****************************



4.5.5 Definition of FTW jobs and job streams in the controller
                When the script is defined either as centralized in the Tivoli Workload Scheduler
                for z/OS job library (JOBLIB) or as non-centralized in the Tivoli Workload
                Scheduler for z/OS script library (EQQSCLIB), you can define some job streams
                (applications) to run the defined scripts.

                Definition of job streams (applications) for fault-tolerant workstation jobs is done
                exactly the same way as normal mainframe job streams: The job is defined in the
                job stream, and dependencies are added (predecessor jobs, time dependencies,
                special resources). Optionally, a run cycle can be added to run the job stream at
                a set time.

                When the job stream is defined, the fault-tolerant workstation jobs can be
                executed and the final verification test can be performed.

                Figure 4-28 shows an example of a job stream that is used to test the end-to-end
                scheduling environment. There are four distributed jobs (seen in the left window
                in the figure) and these jobs will run on workdays (seen in the right window).




                Figure 4-28 Example of a job stream used to test end-to-end scheduling




It is not necessary to create a run cycle for job streams to test the FTW jobs, as
         they can be added manually to the plan in Tivoli Workload Scheduler for z/OS.



4.6 Verification test of end-to-end scheduling
         At this point we have:
            Installed and configured the Tivoli Workload Scheduler for z/OS controller for
            end-to-end scheduling
            Installed and configured the Tivoli Workload Scheduler for z/OS end-to-end
            server
            Defined the network topology for the distributed Tivoli Workload Scheduler
            network in the end-to-end server and plan batch jobs
            Installed and configured Tivoli Workload Scheduler on the servers in the
            network for end-to-end scheduling
            Defined fault-tolerant workstations and activated these workstations in the
            Tivoli Workload Scheduler for z/OS network
            Verified that the plan program executed successfully with the end-to-end
            topology statements
            Created members with centralized script and non-centralized scripts
            Created job streams containing jobs with centralized and non-centralized
            scripts

         It is time to perform the final verification test of end-to-end scheduling. This test
         verifies that:
            Jobs with centralized script definitions can be executed on the FTWs, and the
            job log can be browsed for these jobs.
            Jobs with non-centralized script definitions can be executed on the FTWs,
            and the job log can be browsed for these jobs.
            Jobs with a combination of centralized and non-centralized script definitions
            can be executed on the FTWs, and the job log can be browsed for these jobs.

          The verification can be performed in several ways. Because our goal is to verify
          that the end-to-end environment is working and that it is possible to run jobs on
          the FTWs, we focus on those two points.

         We used the Job Scheduling Console in combination with legacy Tivoli Workload
         Scheduler for z/OS ISPF panels for the verifications. Of course, it is possible to
         perform the complete verification only with the legacy ISPF panels.




Finally, if you decide to use only centralized scripts or only non-centralized
scripts, you do not have to verify both cases.


4.6.1 Verification of job with centralized script definitions
                Add a job stream with a job defined with a centralized script. The job from
                Example 4-17 on page 220 is used in this example.

                Before the job was submitted, the JCL (script) was edited and the parameter on
                the rmstdlist program was changed from 10 to 1 (Figure 4-29).




                Figure 4-29 Edit JCL for centralized script, rmstdlist parameter changed from 10 to 1

                The job is submitted, and it is verified that the job completes successfully on the
                FTA. The output is verified by browsing the job log. Figure 4-30 on page 237 shows
                only the first part of the job log. See the complete job log in Example 4-23 on
                page 237.

                From the job log, you can see that the centralized script that was defined in the
                controller JOBLIB is copied to (see the line with the = JCLFILE text):
                    /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8CFD2B8A25EC41.J_0
                    05_F100CENTHOUSEK.sh




The Tivoli Workload Scheduler for z/OS JCL variable &ODMY1 in the “echo” line
(Figure 4-29) has been substituted by the Tivoli Workload Scheduler for z/OS
controller with the job stream planning date (for our case, 210704, seen in
Example 4-23 on page 237).




Figure 4-30 Browse first part of job log for the centralized script job in JSC

Example 4-23 The complete job log for the centralized script job
===============================================================
= JOB       : OPCMASTER#BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK
= USER      : twstest
= JCLFILE   :
/tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8CFD2B8A25EC41.J_0
05_F100CENTHOUSEK.sh
= Job Number: 52754
= Wed 07/21/04 21:52:39 DFT
===============================================================
TWS for UNIX/JOBMANRC 8.2
AWSBJA001I Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc
/tivoli/tws/twstest/tws/cen
tralized/OPCMASTER.BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK.sh



TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
                Licensed Materials Property of IBM
                5698-WKB
                (C) Copyright IBM Corp 1998,2001
                US Government User Restricted Rights
                Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
                Corp.
                Installed for user ''.
                Locale LANG set to "C"
                Now we are running the script
                /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8C
                FD2B8A25EC41.J_005_F100CENTHOUSEK.sh
                OPC occurrence plan date is: 210704
                TWS for UNIX/RMSTDLIST 8.2
                AWSBJA001I Licensed Materials Property of IBM
                5698-WKB
                (C) Copyright IBM Corp 1998,2003
                US Government User Restricted Rights
                Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
                Corp.
                AWSBIS324I Will list directories older than -1
                /tivoli/tws/twstest/tws/stdlist/2004.07.13
                /tivoli/tws/twstest/tws/stdlist/2004.07.14
                /tivoli/tws/twstest/tws/stdlist/2004.07.15
                /tivoli/tws/twstest/tws/stdlist/2004.07.16
                /tivoli/tws/twstest/tws/stdlist/2004.07.18
                /tivoli/tws/twstest/tws/stdlist/2004.07.19
                /tivoli/tws/twstest/tws/stdlist/logs/20040713_NETMAN.log
                /tivoli/tws/twstest/tws/stdlist/logs/20040713_TWSMERGE.log
                /tivoli/tws/twstest/tws/stdlist/logs/20040714_NETMAN.log
                /tivoli/tws/twstest/tws/stdlist/logs/20040714_TWSMERGE.log
                /tivoli/tws/twstest/tws/stdlist/logs/20040715_NETMAN.log
                /tivoli/tws/twstest/tws/stdlist/logs/20040715_TWSMERGE.log
                /tivoli/tws/twstest/tws/stdlist/logs/20040716_NETMAN.log
                /tivoli/tws/twstest/tws/stdlist/logs/20040716_TWSMERGE.log
                /tivoli/tws/twstest/tws/stdlist/logs/20040718_NETMAN.log
                /tivoli/tws/twstest/tws/stdlist/logs/20040718_TWSMERGE.log
                ===============================================================
                = Exit Status           : 0
                = System Time (Seconds) : 1     Elapsed Time (Minutes) : 0
                = User Time (Seconds)   : 0
                = Wed 07/21/04 21:52:40 DFT
                ===============================================================


                This completes the verification of centralized script.




4.6.2 Verification of job with non-centralized scripts
                     Add a job stream with a job defined with a non-centralized script. Our example
                     uses the non-centralized script (F100DJ02) from Example 4-20 on page 231.

                     The job is submitted, and it is verified that the job ends in error. (Remember that
                     the JOBCMD will try to remove a non-existing file.)

                     Reply to the prompt with Yes, and the recovery job is executed (Figure 4-31).




     • The job ends in error with RC=0002.
     • Right-click the job to open a context menu (1).
     • In the context menu, select Recovery Info to open the Job Instance Recovery
       Information window.
     • The recovery message is shown, and you can reply to the prompt by clicking
       the Reply to Prompt arrow.
     • Select Yes and click OK to run the recovery job and rerun the failed F100DJ02
       job (if the recovery job ends successfully).

Figure 4-31 Running F100DJ02 job with non-centralized script and RECOVERY options

                     The same process can be performed in Tivoli Workload Scheduler for z/OS
                     legacy ISPF panels.

                     When the job ends in error, type RI (for Recovery Info) for the job in the Tivoli
                     Workload Scheduler for z/OS Error list to get the panel shown in Figure 4-32 on
                     page 240.




Figure 4-32 Recovery Info ISPF panel in Tivoli Workload Scheduler for z/OS

                To reply Yes to the prompt, type PY in the Option field.

                Then press Enter several times to see the result of the recovery job in the same
                panel. The Recovery job info fields will be updated with information for Recovery
                jobid, Duration, and so on (Figure 4-33).




                Figure 4-33 Recovery Info after the Recovery job has been executed.

                The recovery job has been executed successfully, and because the recovery
                option (Figure 4-32) is RERUN, the failing job (F100DJ02) is rerun and
                completes successfully.

                Finally, the job log is browsed for the completed F100DJ02 job (Example 4-24 on
                page 241). The job log shows that the user is twstest ( = USER) and that the
                twshome directory is /tivoli/tws/twstest/tws (part of the = JCLFILE line).




Example 4-24 The job log for the second run of F100DJ02 (after the RECOVERY job)
===============================================================
= JOB       : OPCMASTER#BB8D04BFE71A3901.J_010_F100DECSCRIPT01
= USER      : twstest
= JCLFILE : rm /tivoli/tws/twstest/tws/demo.sh
= Job Number: 24100
= Wed 07/21/04 22:46:33 DFT
===============================================================
TWS for UNIX/JOBMANRC 8.2
AWSBJA001I Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc rm
TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
Installed for user ''.
Locale LANG set to "C"
Now we are running the script rm /tivoli/tws/twstest/tws/demo.sh
===============================================================
= Exit Status           : 0
= System Time (Seconds) : 0     Elapsed Time (Minutes) : 0
= User Time (Seconds)   : 0
= Wed 07/21/04 22:46:33 DFT
===============================================================


If you compare the job log output with the non-centralized script definition in
Example 4-20 on page 231, you see that the user and the twshome directory
were defined as Tivoli Workload Scheduler for z/OS JCL variables (&TWSHOME
and %TWSUSER). These variables have been substituted with values from the
Tivoli Workload Scheduler for z/OS variable table E2EVAR (specified in the
VARSUB TABLES() parameter).

This variable substitution is performed when the job definition is added to
the Symphony file, either during a normal Tivoli Workload Scheduler for z/OS
plan extension or replan, or when a user adds the job stream to the plan ad hoc
in Tivoli Workload Scheduler for z/OS.
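
A non-centralized script definition that uses such variables is a SCRPTLIB
member that combines a VARSUB statement with the JOBREC statement. The
following is only a minimal sketch of that pattern, built from the table,
variables, and command seen in this test; it is not a copy of Example 4-21,
and the exact VARSUB and JOBREC keywords to use should be checked in IBM
Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265:

   VARSUB
     TABLES(E2EVAR)
   JOBREC
     JOBCMD('rm &TWSHOME./demo.sh')
     JOBUSR(%TWSUSER.)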

This completes the test of the non-centralized script.



4.6.3 Verification of centralized script with JOBREC parameters
                 We verified a job that uses a centralized script combined with a JOBREC
                 statement in the script library (EQQSCLIB).

                 The verification uses a job named F100CJ02 with the centralized script shown
                 in Example 4-25. The centralized script is defined in the Tivoli Workload
                 Scheduler for z/OS JOBLIB.
Example 4-25 Centralized script for test in combination with JOBREC
EDIT       TWS.V8R20.JOBLIB(F100CJ02) - 01.07              Columns 00001 00072
 Command ===>                                                  Scroll ===> CSR
 ****** ***************************** Top of Data ******************************
 000001 //*%OPC SCAN
 000002 //* OPC Here is an OPC JCL Variable OYMD1: &OYMD1.
 000003 //* OPC
 000004 //*%OPC RECOVER JOBCODE=(12),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO)
 000005 //* OPC
 000006 echo 'Todays OPC date is: &OYMD1'
 000007 echo 'Unix system date is: '
 000008 date
 000009 echo 'OPC schedule time is: ' &CHHMMSSX
 000010 exit 12
 ****** **************************** Bottom of Data ****************************


                 The JOBREC statement for the F100CJ02 job is defined in the Tivoli Workload
                 Scheduler for z/OS scriptlib (EQQSCLIB); see Example 4-26. It is important that
                 the member name for the job (F100CJ02 in our example) is the same in JOBLIB
                 and SCRPTLIB.
Example 4-26 JOBREC definition for the F100CJ02 job
EDIT       TWS.V8R20.SCRPTLIB(F100CJ02) - 01.07            Columns 00001 00072
Command ===>                                                  Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 JOBREC
000002        RCCONDSUC('RC<7')
000003        JOBUSR(maestro)
****** **************************** Bottom of Data ****************************


                 The first time the job is run, it ends in error with return code 12 (because
                 of the exit 12 line in the centralized script).

                 Example 4-27 on page 243 shows the job log. Note the “= JCLFILE” line. Here
                 you can see TWSRCMAP: RC<7, which is added because we specified
                 RCCONDSUC(‘RC<7’) in the JOBREC definition for the F100CJ02 job.
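
                The success condition in RCCONDSUC() is an expression, so it is not limited
                to a single comparison such as RC<7. As an illustration only (we did not use
                this definition in our tests, and the exact expression syntax should be
                verified in the scriptlib documentation), a JOBREC of the following form is
                intended to treat only return codes 0 and 6 as successful:

                   JOBREC
                     RCCONDSUC('(RC=0) OR (RC=6)')
                     JOBUSR(maestro)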




Example 4-27 Job log for the F100CJ02 job (ends with return code 12)
===============================================================
= JOB       : OPCMASTER#BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01
= USER      : maestro
= JCLFILE   :
/tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_0
20_F100CENTSCRIPT01.sh TWSRCMAP: RC<7
= Job Number: 56624
= Wed 07/21/04 23:07:16 DFT
===============================================================
TWS for UNIX/JOBMANRC 8.2
AWSBJA001I Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2003
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc
/tivoli/tws/twstest/tws/cen
tralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh
TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
Installed for user ''.
Locale LANG set to "C"
Todays OPC date is: 040721
Unix system date is:
Wed Jul 21 23:07:17 DFT 2004
OPC schedule time is: 23021516
===============================================================
= Exit Status           : 12
= System Time (Seconds) : 0     Elapsed Time (Minutes) : 0
= User Time (Seconds)   : 0
= Wed 07/21/04 23:07:17 DFT
===============================================================


The job log also shows that the user is set to maestro (the = USER line). This is
because we specified JOBUSR(maestro) in the JOBREC statement.

Next, before the job is rerun, the JCL (the centralized script) is edited, and the
last line is changed from exit 12 to exit 6. Example 4-28 on page 244 shows the
edited JCL.




Example 4-28 The script (JCL) for the F100CJ02 job after editing; exit changed to 6
                ******   ***************************** Top of Data ******************************
                000001   //*>OPC SCAN
                000002   //* OPC Here is an OPC JCL Variable OYMD1: 040721
                000003   //* OPC
                000004   //*>OPC RECOVER JOBCODE=(12),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO)
                000005   //* OPC MSG:
                000006   //* OPC MSG: I *** R E C O V E R Y A C T I O N S       T A K E N ***
                000007   //* OPC
                000008   echo 'Todays OPC date is: 040721'
                000009   echo
                000010   echo 'Unix system date is: '
                000011   date
                000012   echo
                000013   echo 'OPC schedule time is: ' 23021516
                000014   echo
                000015   exit 6
                ******   **************************** Bottom of Data ****************************


                Note that the line with the Tivoli Workload Scheduler for z/OS Automatic
                Recovery statement has changed: The % sign has been replaced by the > sign.
                This means that Tivoli
                Workload Scheduler for z/OS has performed the recovery action by adding the
                F100CENTRECAPPL job stream (application).

                After the edit, the job is rerun and completes successfully. (It is marked as
                completed with return code = 0 in Tivoli Workload Scheduler for z/OS.) The
                RCCONDSUC() parameter in the scriptlib definition for the F100CJ02 job marks
                the job as successful even though the exit code from the script was 6
                (Example 4-29).
                Example 4-29 Job log for the F100CJ02 job with script exit code = 6
                ===============================================================
                = JOB       : OPCMASTER#BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01
                = USER      : maestro
                = JCLFILE   :
                /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_0
                20_F100CENTSCRIPT01.sh TWSRCMAP: RC<7
                = Job Number: 41410
                = Wed 07/21/04 23:35:48 DFT
                ===============================================================
                TWS for UNIX/JOBMANRC 8.2
                AWSBJA001I Licensed Materials Property of IBM
                5698-WKB
                (C) Copyright IBM Corp 1998,2003
                US Government User Restricted Rights
                Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
                Corp.



AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc
         /tivoli/tws/twstest/tws/cen
         tralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh
         TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
         Licensed Materials Property of IBM
         5698-WKB
         (C) Copyright IBM Corp 1998,2001
         US Government User Restricted Rights
         Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
         Corp.
         Installed for user ''.
         Locale LANG set to "C"
         Todays OPC date is: 040721
         Unix system date is:
         Wed Jul 21 23:35:49 DFT 2004
         OPC schedule time is: 23021516
         ===============================================================
         = Exit Status           : 6
         = System Time (Seconds) : 0     Elapsed Time (Minutes) : 0
         = User Time (Seconds)   : 0
         = Wed 07/21/04 23:35:49 DFT
         ===============================================================


         This completes the verification of centralized script combined with JOBREC
         statements.



4.7 Activate support for the Tivoli Workload Scheduler
Job Scheduling Console
         To activate support for use of the Tivoli Workload Scheduler Job Scheduling
         Console (JSC), perform the following steps:
          1. Install and start a Tivoli Workload Scheduler for z/OS JSC server on the
             mainframe.
         2. Install Tivoli Management Framework 4.1 or 3.7.1.
         3. Install Job Scheduling Services in Tivoli Management Framework.
         4. To be able to work with Tivoli Workload Scheduler for z/OS (OPC) controllers
            from the JSC:
            a. Install the Tivoli Workload Scheduler for z/OS connector in Tivoli
               Management Framework.
            b. Create instances in Tivoli Management Framework that point to the Tivoli
               Workload Scheduler for z/OS controllers you want to access from the JSC.



5. To be able to work with Tivoli Workload Scheduler domain managers or
                   fault-tolerant agents from the JSC:
                    a. Install the Tivoli Workload Scheduler connector in Tivoli Management
                       Framework. Note that the Tivoli Management Framework server or
                       managed node must be installed on the machine where the Tivoli
                       Workload Scheduler instance is installed.
                    b. Create instances in Tivoli Management Framework that point to the Tivoli
                       Workload Scheduler domain managers or fault-tolerant agents that you
                       would like to access from the JSC.
                6. Install the JSC on the workstations where it should be used.

                The following sections describe installation steps in more detail.


4.7.1 Install and start Tivoli Workload Scheduler for z/OS JSC server
                To use Tivoli Workload Scheduler Job Scheduling Console for communication
                with Tivoli Workload Scheduler for z/OS, you must initialize the Tivoli Workload
                Scheduler for z/OS connector. The connector forms the bridge between the Tivoli
                Workload Scheduler Job Scheduling Console and the Tivoli Workload Scheduler
                for z/OS product.

                The JSC communicates with Tivoli Workload Scheduler for z/OS through the
                scheduler server using the TCP/IP protocol. The JSC needs the server to run as
                a started task in a separate address space. The Tivoli Workload Scheduler for
                z/OS server communicates with Tivoli Workload Scheduler for z/OS and passes
                the data and return codes back to the connector.

                The security model that is implemented for Tivoli Workload Scheduler Job
                Scheduling Console is similar to that already implemented by other Tivoli
                products that have been ported to z/OS (namely IBM Tivoli User Administration
                and IBM Tivoli Security Management). The Tivoli Framework security handles
                the initial user verification, but it is necessary to obtain a valid corresponding
                RACF user ID to be able to work with the security environment in z/OS.

                Even though it is possible to have one server started task handling end-to-end
                scheduling, JSC communication, and even APPC communication, we
                recommend having a server started task dedicated to JSC communication
                (SERVOPTS PROTOCOL(JSC)). This has the advantage that you do not have to
                stop the whole end-to-end server process if only the JSC communication has to
                be restarted.

                We will install a server dedicated to JSC communication and call it the JSC
                server.




When JSC is used to access the Tivoli Workload Scheduler for z/OS controller
through the JSC server, the JSC server uses the Tivoli Workload Scheduler for
z/OS program interface (PIF) to interface with the controller.

You can find an example of the started task procedure in installation member
EQQSER in the sample library that is generated by the EQQJOBS installation
aid. An example of the initialization statements can be found in the EQQSERP
member in the sample library generated by the EQQJOBS installation aid. After
the installation of the JSC server, you can get almost the same functionality from
the JSC as you have with the legacy Tivoli Workload Scheduler for z/OS ISPF
interface.

Configure and start the JSC server and verify the start
First, create the started task procedure for the JSC server. The EQQSER
member in the sample library can be used. Take the following into consideration
when customizing the EQQSER sample (a sketch of a customized procedure follows
this list):
   Make sure that the C runtime library (CEE.SCEERUN) is concatenated in the
   STEPLIB of the server JCL if it is not in the LINKLIST.
   If you have multiple TCP/IP stacks or if the name of the procedure that was
   used to start the TCPIP address space is different from TCPIP, introduce the
   SYSTCPD DD card pointing to a data set containing the TCPIPJOBNAME
   parameter. (See DD SYSTCPD in the TCP/IP manuals.)
   Customize the JSC server initialization parameters file. (See the EQQPARM
   DD statement in the server JCL.) The installation member EQQSERP already
   contains a template.
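
The following sketch shows what a customized JSC server procedure could look
like after these changes. It is modeled on the EQQSER sample and on the
end-to-end server procedure shown in Example 5-1 in Chapter 5; the data set
names, the EQQPARM member name (TWSCJSC), and the SYSTCPD data set name are
placeholders from our environment rather than values to copy as-is, and the
SYSTCPD DD statement is needed only in the multi-stack situation described
above:

   //TWSCJSC  EXEC PGM=EQQSERVR,REGION=64M,TIME=1440
   //STEPLIB  DD DISP=SHR,DSN=EQQ.SEQQLMD0
   //         DD DISP=SHR,DSN=CEE.SCEERUN       C RUNTIME IF NOT IN LINKLIST
   //EQQMLIB  DD DISP=SHR,DSN=EQQ.SEQQMSG0
   //EQQMLOG  DD SYSOUT=*
   //EQQPARM  DD DISP=SHR,DSN=TWS.INST.PARM(TWSCJSC)
   //EQQDUMP  DD DISP=SHR,DSN=TWS.INST.EQQDUMPS
   //SYSMDUMP DD DISP=SHR,DSN=TWS.INST.SYSDUMPS
   //SYSTCPD  DD DISP=SHR,DSN=TCPIP.TCPPARMS(TCPDATA)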

For information about the JSC server parameters, refer to IBM Tivoli Workload
Scheduler for z/OS Customization and Tuning, SC32-1265.

We used the JSC server initialization parameters shown in Example 4-30. Also
see Figure 4-34 on page 249.
Example 4-30 The JSC server initialization parameter
/**********************************************************************/
/* SERVOPTS: run-time options for the TWSCJSC started task            */
/**********************************************************************/
SERVOPTS SUBSYS(TWSC)
/*--------------------------------------------------------------------*/
/* TCP/IP server is needed for JSC GUI usage. Protocol=JSC            */
/*--------------------------------------------------------------------*/
           PROTOCOL(JSC)                  /* This server is for JSC   */
           JSCHOSTNAME(TWSCJSC)           /* DNS name for JSC         */
           USERMAP(USERS)                 /* RACF user / TMF adm. map */
           PORTNUMBER(38888)              /* Portnumber for JSC comm. */
           CODEPAGE(IBM-037)              /* Codep. EBCDIC/ASCII tr.  */


/*--------------------------------------------------------------------*/
/* CALENDAR parameter is mandatory for server when using TCP/IP      */
/* server.                                                            */
/*--------------------------------------------------------------------*/
INIT     ADOICHK(YES)                     /* ADOI Check ON            */
         CALENDAR(DEFAULT)                /* Use DEFAULT calendar     */
         HIGHDATE(711231)                 /* Default HIGHDATE         */


                 The SUBSYS(), PROTOCOL(JSC), CALENDAR(), and HIGHDATE() parameters are
                 mandatory for using the Tivoli Job Scheduling Console. Make sure that the
                 port you try to use is not reserved by another application.
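
                 One way to protect the port, although not something we set up in our tests,
                 is to reserve it for the JSC server started task in the TCP/IP profile data
                 set. The port number and job name below are the ones from our SERVOPTS
                 example; substitute your own values:

                    PORT
                        38888 TCP TWSCJSC    ; reserved for the JSC server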

                If JSCHOSTNAME() is not specified, the default is to use the host name that is
                returned by the operating system.

                  Note: We got an error when trying to use the JSCHOSTNAME with a host
                  name instead of an IP address (EQQPH18E COMMUNICATION FAILED).
                  This problem is fixed with APAR PQ83670.

                Remember that you always have to define OMVS segments for the user IDs of the
                Tivoli Workload Scheduler for z/OS server started tasks.
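
                For example, an OMVS segment can be added to an existing started task user ID
                with a RACF command such as the following sketch; the user ID, UID number,
                home directory, and shell are placeholders, not values from our installation:

                   ALTUSER TWSCJSC OMVS(UID(3882) HOME('/u/twscjsc') PROGRAM('/bin/sh'))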

                Optionally, the JSC server started task name can be defined in the Tivoli
                Workload Scheduler for z/OS controller OPCOPTS SERVERS() parameter to let
                the controller start and stop the JSC server task when the controller itself is
                started and stopped (Figure 4-34 on page 249).




 Note: It is possible to run many servers, but only one server can be the
 end-to-end server (also called the topology server). Specify this server
 using the TPLGYSRV controller option. The SERVERS option specifies the
 servers that will be started when the controller starts.

Figure 4-34 JSC Server that communicates with TWSC controller
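
Pulling together the statements that appear in Figure 4-34, the split between
the controller and the two servers can be sketched as follows. The values are
the ones used in our environment, and only the parameters relevant to this
discussion are shown; this is a summary of the figure, not a complete set of
parameter members:

   Controller (TWSC):
      OPCOPTS  TPLGYSRV(TWSCE2E)        /* TWSCE2E is the topology server */
               SERVERS(TWSCJSC,TWSCE2E) /* started/stopped with controller*/

   Plan batch jobs (CPE, LTPE, and so on):
      BATCHOPT TPLGYPRM(TPLGPARM)

   JSC server (TWSCJSC):
      SERVOPTS SUBSYS(TWSC)
               PROTOCOL(JSC)
               JSCHOSTNAME(TWSCJSC)
               PORTNUMBER(38888)
               CODEPAGE(IBM-037)
               USERMAP(USERS)

   End-to-end server (TWSCE2E):
      SERVOPTS SUBSYS(TWSC)
               PROTOCOL(E2E)
               TPLGYPRM(TPLGPARM)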

                After the configuration and customization of the JSC server initialization
                statements and the JSC server started task procedure, we started the JSC
                server and saw the messages in Example 4-31 during start.
                Example 4-31 Messages in EQQMLOG for JSC server when started
                EQQZ005I   OPC SUBTASK SERVER            IS BEING STARTED
                EQQPH09I   THE SERVER IS USING THE TCP/IP PROTOCOL
                EQQPH28I   THE TCP/IP STACK IS AVAILABLE
                EQQPH37I   SERVER CAN RECEIVE JSC REQUESTS
                EQQPH00I   SERVER TASK HAS STARTED


                Controlling access to Tivoli Workload Scheduler for z/OS from
                the JSC
                The Tivoli Framework performs a security check, verifying the user ID and
                password, when a user tries to use the Job Scheduling Console. The Tivoli
                Framework associates each user ID and password with an administrator. Tivoli
                Workload Scheduler for z/OS resources are protected by RACF.


The JSC user should have to enter only a single user ID and password
                combination, not one at the Tivoli Framework level and then another at the
                Tivoli Workload Scheduler for z/OS level.

                The security model is based on having the Tivoli Framework security handle the
                initial user verification while obtaining a valid corresponding RACF user ID. This
                makes it possible for the user to work with the security environment in z/OS. The
                z/OS security is based on a table mapping the Tivoli Framework administrator to
                an RACF user ID. When a Tivoli Framework user tries to initiate an action on
                z/OS, the Tivoli administrator ID is used as a key to obtain the corresponding
                RACF user ID.

                The JSC server uses the RACF user ID to build the RACF environment to access
                Tivoli Workload Scheduler for z/OS services, so the Tivoli Administrator must
                relate, or map, to a corresponding RACF user ID.

                There are two ways of getting the RACF user ID:
                    The first way is by using the RACF Tivoli-supplied predefined resource class,
                    TMEADMIN.
                    Consult the section about implementing security in Tivoli Workload Scheduler
                    for z/OS in IBM Tivoli Workload Scheduler for z/OS Customization and
                    Tuning, SC32-1265, for the complete setup of the TMEADMIN RACF class.
                    The other way is to use a new OPC Server Initialization Parameter to define a
                    member in the file identified by the EQQPARM DD statement in the server
                    startup job.
                    This member contains all of the associations for a TME user with an RACF
                    user ID. You should set the parameter USERMAP in the JSC server
                    SERVOPTS Initialization Parameter to define the member name.

                Use of the USERMAP(USERS)
                We used the JSC server SERVOPTS USERMAP(USERS) parameter to define
                the mapping between Tivoli Framework Administrators and z/OS RACF users.

                USERMAP(USERS) means that the definitions (mappings) are defined in a
                member named USERS in the EQQPARM library. See Figure 4-35 on page 251.
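
                 The USER entries in the USERS member follow the pattern shown in Figure 4-35;
                 the administrator names and RACF user IDs below are the ones from our test
                 TMR (M-REGION):

                    USER 'ROOT@M-REGION'    RACFUSER(TMF)  RACFGROUP(TIVOLI)
                    USER 'MIKE@M-REGION'    RACFUSER(MAL)  RACFGROUP(TIVOLI)
                    USER 'FINN@M-REGION'    RACFUSER(FBK)  RACFGROUP(TIVOLI)
                    USER 'STEFAN@M-REGION'  RACFUSER(SF)   RACFGROUP(TIVOLI)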




                 The figure illustrates the flow: when a JSC user connects to the computer
                 running the OPC connector, the user is identified as a local TMF
                 administrator. When the user attempts to view or modify the OPC databases or
                 plan, the JSC server task uses RACF to determine whether to authorize the
                 action. If the USERMAP option is specified in the SERVOPTS of the JSC server
                 task, the JSC server uses this map to associate TMF administrators with RACF
                 users; it is also possible to activate the TMEADMIN RACF class and add the
                 TMF administrator names directly there. For auditing purposes, it is
                 recommended that one TMF administrator be defined for each RACF user.

Figure 4-35 The relation between TMF administrators and RACF users via USERMAP

                      For example, in the definitions in the USERS member in EQQPARM in
                      Figure 4-35, TMF administrator MIKE@M-REGION is mapped to RACF user MAL
                      (MAL is a member of RACF group TIVOLI). If MIKE logs on to the TMF region
                      M-REGION and works with the Tivoli Workload Scheduler for z/OS controller
                      from the JSC, he has the access defined for RACF user MAL. In other
                      words, the USER definition maps TMF Administrator MIKE@M-REGION to RACF
                      user MAL.

                      Whatever MIKE@M-REGION does from the JSC in the controller is performed
                      with the RACF authorization defined for the MAL user. All RACF logging is
                      also done for the MAL user.

                     The TMF Administrator is defined in TMF with a certain authorization level. The
                     TMF Administrator must have the USER role to be able to use the Tivoli
                     Workload Scheduler for z/OS connector.




Notes:
                      If you decide to use the USERMAP to map TMF administrators to RACF
                      users, you should be aware that users with update access to the member
                      with the mapping definitions (the USERS member in our example) can get
                      access to the Tivoli Workload Scheduler for z/OS controller by editing the
                      mapping definitions.
                      To avoid any misuse, make sure that the member with the mapping
                      definitions is protected according to your security standards. Or use the
                      standard RACF TMEADMIN resource class in RACF to do the mapping.
                      To be able to audit what different JSC users do in Tivoli Workload
                      Scheduler for z/OS, we recommend that you establish a one-to-one
                      relationship between the TMF Administrator and the corresponding RACF
                      user. (That is, you should not allow multiple users to use the same TMF
                      Administrator by adding several different logons to one TMF Administrator.)


4.7.2 Installing and configuring Tivoli Management Framework 4.1
                As we have already discussed, for the new Job Scheduling Console interface to
                communicate with the scheduling engines, it requires that a few other
                components be installed. If you are still not sure how all of the pieces fit together,
                review 2.4, “Job Scheduling Console and related components” on page 89.

                When installing Tivoli Workload Scheduler 8.2 using the ISMP installer GUI, you
                are given the option to install the Tivoli Workload Scheduler connector. If you
                choose this option, the installer program automatically installs the following
                components:
                    Tivoli Management Framework 4.1, configured as a TMR server
                    Job Scheduling Services 1.2
                    Tivoli Workload Scheduler Connector 8.2

                The Tivoli Workload Scheduler 8.2 installer GUI will also automatically create a
                Tivoli Workload Scheduler connector instance and a TMF administrator
                associated with your Tivoli Workload Scheduler user. Letting the installer do the
                work of installing and configuring these components is generally a very good
                idea because it saves the trouble of performing each of these steps individually.

           If you choose not to let the Tivoli Workload Scheduler 8.2 installer install and
           configure these components for you, you can install them later. The following
           instructions for installing a TMR server and getting it up and running, and for
           installing Job Scheduling Services and the connectors, are primarily intended for
           environments that do not already have a TMR server, or in which a separate TMR
           server will be installed for IBM Tivoli Workload Scheduler.



In the last part of this section, we discuss in more detail the steps specific to
          end-to-end scheduling: creating connector instances and TMF administrators.

          The Tivoli Management Framework is easy to install. If you already have the
          Framework installed in your organization, it is not necessary to install the
          components specific to Tivoli Workload Scheduler (the JSS and connectors) on a
          node in your existing Tivoli Managed Region. You may prefer to install a
          stand-alone TMR server solely for the purpose of providing the connection
          between the IBM Tivoli Workload Scheduler suite and its interface, the JSC. If
          your existing TMR is busy with other operations, such as monitoring or software
          distribution, you might want to consider installing a separate stand-alone TMR
          server for Tivoli Workload Scheduler. If you decide to install the JSS and
          connectors on an existing TMR server or managed node, you can skip to “Install
          Job Scheduling Services” and “Installing the connectors” on page 254.


4.7.3 Alternate method using Tivoli Management Framework 3.7.1
          If for some reason you need to use the older 3.7.1 version of TMF instead of the
          newer 4.1 version, you must first install TMF 3.7B and then upgrade it to 3.7.1.

           Note: If you are installing TMF 3.7B on AIX 5.1 or later, you will need an
           updated version of the TMF 3.7B CD because the original TMF 3.7B CD did
           not correctly recognize AIX 5 as a valid target platform.

           Order PTF U482278 to get this updated TMF 3.7B CD.


          Installing Tivoli Management Framework 3.7B
          The first step is to install Tivoli Management Framework Version 3.7B. For
          instructions, refer to the Tivoli Framework 3.7.1 Installation Guide, GC32-0395.

          Upgrade to Tivoli Management Framework 3.7.1
           Version 3.7.1 of Tivoli Management Framework is required by the Job Scheduling
           Services, so if you do not already have Version 3.7.1 of the Framework installed,
           you must upgrade to it.

          Install Job Scheduling Services
          Follow the instructions in the IBM Tivoli Workload Scheduler Job Scheduling
           Console User’s Guide, Feature Level 1.3, SC32-1257, to install JSS. As we
          discussed in Chapter 2, “End-to-end scheduling architecture” on page 25, JSS is
          simply a library used by the Framework, and it is a prerequisite of the connectors.




The hardware and software prerequisites for the Job Scheduling Services are:
                    Software
                    IBM Tivoli Management Framework: Version 3.7.1 or later for Microsoft®
                    Windows, AIX, HP-UX, Sun Solaris, and Linux.
                    Hardware
                    –    CD-ROM drive for installation
                    –    Approximately 4 MB of free disk space

                Job Scheduling Services is supported on the following platforms:
                    Microsoft Windows
                    – Windows NT 4.0 with Service Pack 6
                    – Windows 2000 Server or Advanced Server with Service Pack 3
                    IBM AIX Version 4.3.3, 5.1, 5.2
                    HP-UX PA-RISC Version 11.0, 11i
                    Sun Solaris Version 7, 8, 9
                    Linux Red Hat Version 7.2, 7.3
                    SuSE Linux Enterprise Server for x86 Version 8
                    SuSE Linux Enterprise Server for S/390® and zSeries (kernel 2.4, 31–bit)
                    Version 7 (new with this version)
                    Red Hat Linux for S/390 (31–bit) Version 7 (new with this version)

                Installing the connectors
                Follow the installation instructions in the IBM Tivoli Workload Scheduler Job
                Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257.

                When installing the Tivoli Workload Scheduler connector, we recommend that
                you do not select the Create Instance check box. Create the instances after the
                connector has been installed.

                The hardware and software prerequisites for the Tivoli Workload Scheduler for
                z/OS connector are:
                    Software:
                    – IBM Tivoli Management Framework: Version 3.7.1 or later
                    – Tivoli Workload Scheduler for z/OS 8.1, or Tivoli OPC 2.1 or later
                    – Tivoli Job Scheduling Services 1.3
                    – TCP/IP network communications



– A Tivoli Workload Scheduler for z/OS user account (required), which you
                can create beforehand or have the setup program create for you
              Hardware:
              –   CD-ROM drive for installation.
              – Approximately 3 MB of free disk space for the installation. In addition, the
                Tivoli Workload Scheduler for z/OS connector produces log files and
                temporary files, which are placed on the local hard drive.

           Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS connector are
           supported on the following platforms:
              Microsoft Windows
              – Windows NT 4.0 with Service Pack 6
              – Windows 2000 Server or Advanced Server with Service Pack 3
              IBM AIX Version 4.3.3, 5.1, 5.2
              HP-UX PA-RISC Version 11.0, 11i
              Sun Solaris Version 7, 8, 9
              Linux Red Hat Version 7.2, 7.3
              SuSE Linux Enterprise Server for x86 Version 8
              SuSE Linux Enterprise Server for S/390 and zSeries (kernel 2.4, 31–bit)
              Version 7 (new with this version)
              Red Hat Linux for S/390 (31–bit) Version 7 (new with this version)

           For more information, see IBM Tivoli Workload Scheduler Job Scheduling
           Console Release Notes, Feature level 1.3, SC32-1258.


4.7.4 Creating connector instances
           As we discussed in Chapter 2, “End-to-end scheduling architecture” on page 25,
           the connectors tell the Framework how to communicate with the different types of
           scheduling engine.

           To control the workload of the entire end-to-end scheduling network from the
           Tivoli Workload Scheduler for z/OS controller, it is necessary to create a Tivoli
           Workload Scheduler for z/OS connector instance to connect to that controller.

           It may also be a good idea to create a Tivoli Workload Scheduler connector
           instance on a fault-tolerant agent or domain manager. Sometimes the status may
           get out of sync between an FTA or DM and the Tivoli Workload Scheduler for
           z/OS controller. When this happens, it is helpful to be able to connect directly to
           that agent and get the status directly from there. Retrieving job logs (standard


lists) is also much faster through a direct connection to the FTA than through the
                Tivoli Workload Scheduler for z/OS controller.

                Creating a Tivoli Workload Scheduler for z/OS connector
                instance
                You have to create at least one Tivoli Workload Scheduler for z/OS connector
                instance for each z/OS controller that you want to access with the Tivoli Job
                Scheduling Console. This is done using the wopcconn command.

                In our test environment, we wanted to be able to connect to a Tivoli Workload
                Scheduler for z/OS controller running on a mainframe with the host name
                twscjsc. On the mainframe, the Tivoli Workload Scheduler for z/OS TCP/IP
                 server listens on TCP port 5000. The TMR managed node where we created the
                 connector instance is london. We called the new connector instance TWSC.
                 Here is the command we used:
                    wopcconn -create -h london -e TWSC -a twscjsc -p 5000

                 The result of this will be that when we use the JSC to connect to london, a
                 new connector instance called TWSC appears in the Job Scheduling list on the
                 left side of the window. We can access the Tivoli Workload Scheduler for z/OS
                 scheduling engine by clicking that new entry in the list.

                It is also possible to run wopcconn in interactive mode. To do this, just run
                wopcconn with no arguments.

                Refer to Appendix A, “Connector reference” on page 343 for a detailed
                description of the wopcconn command.

                Creating a Tivoli Workload Scheduler connector instance
                Remember that a Tivoli Workload Scheduler connector instance must have local
                access to the Tivoli Workload Scheduler engine with which it is associated. This
                is done using the wtwsconn.sh command.

                 In our test environment, we wanted to be able to use the JSC to connect to a
                 Tivoli Workload Scheduler engine on the host london. This host has two Tivoli
                 Workload Scheduler engines installed, so we had to make sure that the path we
                 specified when creating the connector was the path to the correct engine. We
                 called the new connector instance London-A to reflect that it is associated
                 with the TWS-A engine on this host (as opposed to the other Tivoli Workload
                 Scheduler engine, TWS-B). Here is the command we used:
                    wtwsconn.sh -create -h london -n London-A -t /tivoli/TWS/tws-a




The result is that when we use the JSC to connect to london, a new connector
            instance called London-A appears in the Job Scheduling list on the left side of
            the window. We can access the TWS-A scheduling engine by clicking that new entry
            in the list.

           Refer to Appendix A, “Connector reference” on page 343 for a detailed
           description of the wtwsconn.sh command.


4.7.5 Creating TMF administrators for Tivoli Workload Scheduler
           When a user logs onto the Job Scheduling Console, the Tivoli Management
           Framework verifies that the user’s logon is listed in an existing TMF administrator.

           TMF administrators for Tivoli Workload Scheduler
           A Tivoli Management Framework administrator must be created for the Tivoli
           Workload Scheduler user. Additional TMF administrators can be created for other
           users who will access Tivoli Workload Scheduler using JSC.

           TMF administrators for Tivoli Workload Scheduler for z/OS
           The Tivoli Workload Scheduler for z/OS TCP/IP server associates the Tivoli
           administrator to an RACF user. If you want to be able to identify each user
           uniquely, one Tivoli Administrator should be defined for each RACF user. If
           operating system users corresponding to the RACF users do not already exist on
           the TMR server or on a managed node in the TMR, you must first create one OS
           user for each Tivoli administrator that will be defined. These users can be
            created on the TMR server or on any managed node in the TMR. After you have
           created those users, you can simply add those users’ logins to the TMF
           administrators that you create.

            Important: When creating users or setting their passwords, disable any option
            that requires the user to set a password at the first logon. If the operating
             system requires the user’s password to be changed at the first logon, the user
             will have to do this before being able to log on via the Job Scheduling Console.


           Creating TMF administrators
           If Tivoli Workload Scheduler 8.2 is installed using the graphical ISMP installer,
           you have the option of installing the Tivoli Workload Scheduler connector
           automatically during Tivoli Workload Scheduler installation. If you choose this
           option, the installer will create one TMF administrator automatically.

           We still recommend that you create one Tivoli Management Framework
           Administrator for each user who will use JSC.




Perform the following steps from the Tivoli desktop to create a new TMF
                administrator:
                1. Double-click the Administrators icon and select Create → Administrator,
                   as shown in Figure 4-36.




                Figure 4-36 Create Administrator

                2. Enter the Tivoli Administrator name you want to create.
                3. Click Set Logins to specify the login name (Figure 4-37 on page 259). This
                   field is important because it is used to determine the UID with which many
                   operations are performed and represents a UID at the operating system level.




Figure 4-37 Create Administrator

4. Type in the login name and press Enter. Click Set & Close (Figure 4-38).




Figure 4-38 Set Login Names




5. Enter the name of the group. This field is used to determine the GID under
                   which many operations are performed. Click Set & Close.

                The TMR roles you assign to the administrator depend on the actions the user
                will need to perform.
                Table 4-7 Authorization roles required for connector actions
                  An Administrator with this role...              Can perform these actions

                  User                                            Use the instance
                                                                  View instance settings

                  Admin, senior, or super                         Use the instance
                                                                  View instance settings
                                                                  Create and remove instances
                                                                  Change instance settings
                                                                  Start and stop instances

                6. Click the Set TMR Roles icon and add the desired role or roles (Figure 4-39).




                Figure 4-39 Set TMR roles

                7. Click Set & Close to finish your input. This returns you to the Administrators
                   desktop (Figure 4-40 on page 261).




Figure 4-40 Tivoli Administrator desktop


4.7.6 Installing the Job Scheduling Console
           Tivoli Workload Scheduler for z/OS is shipped with the latest version (Version
           1.3) of the Job Scheduling Console. We recommend that you use this version
           because it contains the best functionality and stability.

           The JSC can be installed on the following platforms:
              Microsoft Windows
              – Windows NT 4.0 with Service Pack 6
              – Windows 2000 Server, Professional and Advanced Server with Service
                Pack 3
              – Windows XP Professional with Service Pack 1
              – Windows 2000 Terminal Services
              IBM AIX Version 4.3.3, 5.1, 5.2
               HP-UX PA-RISC 11.0, 11i
               Sun Solaris Version 7, 8, 9
               Linux Red Hat Version 7.2, 7.3
              SuSE Linux Enterprise Server for x86 Version 8



Hardware and software prerequisites
                The following are the hardware and software prerequisites for the Job Scheduling
                Console.

                For use with Tivoli Workload Scheduler for z/OS
                    Software:
                    – IBM Tivoli Workload Scheduler for z/OS connector 1.3
                    – IBM Tivoli Workload Scheduler for z/OS 8.1 or OPC 2.1 or later
                    – Tivoli Job Scheduling Services 1.3
                    – TCP/IP network communication
                    – Java Runtime Environment Version 1.3
                    Hardware:
                    – CD-ROM drive for installation
                    – 70 MB disk space for full installation, or 34 MB for customized (English
                      base) installation plus approximately 4 MB for each additional language.

                For use with Tivoli Workload Scheduler
                    Software:
                    – IBM Tivoli Workload Scheduler connector 8.2
                    – IBM Tivoli Workload Scheduler 8.2
                    – Tivoli Job Scheduling Services 1.3
                    – TCP/IP network communication
                    – Java Runtime Environment Version 1.3

                      Note: You must use the same versions of the scheduler and the connector.

                    Hardware:
                    – CD-ROM drive for installation
                    – 70 MB disk space for full installation, or 34 MB for customized (English
                      base) installation plus approximately 4 MB for each additional language

                Note that the Tivoli Workload Scheduler for z/OS connector can support any
                Operations Planning and Control V2 release level as well as Tivoli Workload
                Scheduler for z/OS 8.1.

                For the most recent software requirements, refer to IBM Tivoli Workload
                Scheduler Job Scheduling Console Release Notes, Feature level 1.3,
                SC32-1258.


The following steps describe how to install the Job Scheduling Console:
1. Insert the Tivoli Job Scheduling Console CD-ROM into the system CD-ROM
   drive or mount the CD-ROM from a drive on a remote system. For this
   example, the CD-ROM drive is drive F.
2. Perform the following steps to run the installation command:
   – On Windows:
      •   From the Start menu, select Run to display the Run dialog.
       •   In the Open field, enter F:\Install
    – On AIX:
       •   Type the following command:
           jre -nojit -cp install.zip install
       •   If that does not work, try:
           jre -nojit -classpath [path to]classes.zip:install.zip install
       •   If that does not work either, on sh-like shells, try:
           cd [to directory where install.zip is located]
           CLASSPATH=[path to]classes.zip:install.zip
           export CLASSPATH
           java -nojit install
       •   Or, for csh-like shells, try:
           cd [to directory where install.zip is located]
           setenv CLASSPATH [path to]classes.zip:install.zip
           java -nojit install
   – On Sun Solaris:
      •   Change to the directory where you downloaded install.zip before
          running the installer.
      •   Enter sh install.bin.
3. The splash window is displayed. Follow the prompts to complete the
   installation. Refer to IBM Tivoli Workload Scheduler Job Scheduling Console
   User’s Guide, Feature Level 1.3, SC32-1257 for more information about
   installation of JSC.

Starting the Job Scheduling Console
Use the following to start the JSC, depending on your platform:
   On Windows
   Depending on the shortcut location that you specified during installation, click
   the JS Console icon or select the corresponding item in the Start menu.




On Windows 95 and Windows 98
                    You can also start the JSC from the command line. Type runcon from the
                     bin\java subdirectory of the installation path.
                    On AIX
                    Type ./AIXconsole.sh
                    On Sun Solaris
                    Type ./SUNconsole.sh

                A Tivoli Job Scheduling Console start-up window is displayed (Figure 4-41).




                Figure 4-41 JSC login window

                Enter the following information and click the OK button to proceed:
                User name                  The user name of the person who has permission to use
                                           the Tivoli Workload Scheduler for z/OS connector
                                           instances
                Password                   The password for the Tivoli Framework administrator
                Host Machine               The name of the Tivoli-managed node that runs the Tivoli
                                           Workload Scheduler for z/OS connector






    Chapter 5.    End-to-end implementation
                  scenarios and examples
                  In this chapter, we describe different scenarios and examples for Tivoli Workload
                  Scheduler for z/OS end-to-end scheduling.

                  We describe and show:
                      “Description of our environment and systems” on page 266
                      “Creation of the Symphony file in detail” on page 273
                      “Migrating Tivoli OPC tracker agents to end-to-end scheduling” on page 274
                      “Conversion from Tivoli Workload Scheduler network to Tivoli Workload
                      Scheduler for z/OS managed network” on page 288
                      “Tivoli Workload Scheduler for z/OS end-to-end fail-over scenarios” on
                      page 303
                      “Backup and maintenance guidelines for FTAs” on page 318
                      “Security on fault-tolerant agents” on page 323
                      “End-to-end scheduling tips and tricks” on page 331




5.1 Description of our environment and systems
                In this section, we describe the systems and configuration we used for the
                end-to-end test scenarios when working on this redbook.

                Figure 5-1 shows the systems and configuration that are used for the end-to-end
                scenarios. All of the systems are connected using TCP/IP connections.


                The network contains a master domain and three first-level domains:

                MASTERDM    The master domain. The master domain manager OPCMASTER runs on
                            a z/OS sysplex; the active engine is on wtsc64 (9.12.6.9), with
                            standby engines on wtsc63 (9.12.6.8) and wtsc65 (9.12.6.10).
                UK          Domain manager U000 on london (AIX, 9.3.4.63), with
                            fault-tolerant agents U001 belfast (AIX, 9.3.4.64) and U002
                            edinburgh (Windows 2000, 9.3.4.188). The extended agents UX01
                            (unixlocl) and UX02 (unixrsh) are defined here and reach a
                            remote AIX box, dublin.
                Europe      Domain manager E000 on geneva (Windows 2000, 9.3.4.185), with
                            fault-tolerant agents E001 rome (AIX, 9.3.4.122) and E002
                            amsterdam (Windows 2000, 9.3.4.187).
                Nordic      Domain manager N000 on stockholm (AIX, 9.3.4.47), with
                            fault-tolerant agents N001 oslo (Windows 2000, 10.2.3.184),
                            N002 helsinki (Linux, 10.2.3.190), and N003 copenhagen
                            (Windows 2000, 10.2.3.189). The Nordic fault-tolerant agents
                            are on a separate network behind a firewall and router
                            (reykjavik, Linux, 9.3.4.129 and 10.2.3.2), and the
                            connections that cross the firewall use SSL.

                Figure 5-1 Systems and configuration used in end-to-end scheduling test scenarios

                We defined the following started task procedure names in z/OS:
                TWST                                         For the Tivoli Workload Scheduler for z/OS agent
                TWSC                                         For the Tivoli Workload Scheduler for z/OS engine
                TWSCE2E                                      For the end-to-end server
                TWSCJSC                                      For the Job Scheduling Console server

                In the following sections we have listed the started task procedure for our
                end-to-end server and the different initialization statements defined for the
                end-to-end scheduling network in Figure 5-1.




Started task procedure for the end-to-end server (TWSCE2E)
Example 5-1 shows the started task procedure for the Tivoli Workload Scheduler
for z/OS end-to-end server, TWSCE2E.
Example 5-1 Started task procedure for the end-to-end server TWSCE2E
//TWSCE2E EXEC PGM=EQQSERVR,REGION=64M,TIME=1440
//* NOTE: 64M IS THE MINIMUM REGION SIZE FOR E2E (SEE PQ78043)
//*********************************************************************
//* THIS IS A STARTED TASK PROCEDURE FOR AN OPC SERVER DEDICATED
//* FOR END-TO-END SCHEDULING.
//*********************************************************************
//STEPLIB DD DISP=SHR,DSN=EQQ.SEQQLMD0
//EQQMLIB DD DISP=SHR,DSN=EQQ.SEQQMSG0
//EQQMLOG DD SYSOUT=*
//EQQPARM DD DISP=SHR,DSN=TWS.INST.PARM(TWSCE2E)
//SYSMDUMP DD DISP=SHR,DSN=TWS.INST.SYSDUMPS
//EQQDUMP DD DISP=SHR,DSN=TWS.INST.EQQDUMPS
//EQQTWSIN DD DISP=SHR,DSN=TWS.INST.TWSC.TWSIN -> INPUT TO CONTROLLER
//EQQTWSOU DD DISP=SHR,DSN=TWS.INST.TWSC.TWSOU -> OUTPUT FROM CONT.
//EQQTWSCS DD DISP=SHR,DSN=TWS.INST.CS           -> CENTRALIZED SCRIPTS


The end-to-end server (TWSCE2E) initialization statements
Example 5-2 defines the initialization statements for the end-to-end scheduling
network shown in Figure 5-1 on page 266.
Example 5-2 End-to-end server (TWSCE2E) initialization statements
/*********************************************************************/
/* SERVOPTS: run-time options for end-to-end server                  */
/*********************************************************************/
SERVOPTS SUBSYS(TWSC)
/*-------------------------------------------------------------------*/
/* TCP/IP server is needed for end-to-end usage.                     */
/*-------------------------------------------------------------------*/
         PROTOCOL(E2E)               /* This server is for E2E "only"*/
         TPLGYPRM(TOPOLOGY)          /* E2E topology definition mbr. */
/*-------------------------------------------------------------------*/
/* If you want to use Automatic Restart manager you must specify:    */
/*-------------------------------------------------------------------*/
         ARM(YES)                    /* Use ARM to restart if abend */




Example 5-3 shows the TOPOLOGY initialization statements.
                Example 5-3 TOPOLOGY initialization statements; member name is TOPOLOGY
                /**********************************************************************/
                /* TOPOLOGY: End-to-End options                                       */
                /**********************************************************************/
                TOPOLOGY TPLGYMEM(TPDOMAIN)             /* Mbr. with domain+FTA descr.*/
                         USRMEM(TPUSER)                 /* Mbr. with Windows user+pw */
                         BINDIR('/usr/lpp/TWS/V8R2M0') /* The TWS for z/OS inst. dir */
                         WRKDIR('/tws/twsce2ew')        /* The TWS for z/OS work dir */
                         LOGLINES(200)                  /* Lines sent by joblog retr. */
                         TRCDAYS(10)                    /* Days to keep stdlist files */
                         CODEPAGE(IBM-037)              /* Codepage for translator    */
                         TCPIPJOBNAME(TCPIP)            /* Name of TCPIP started task */
                         ENABLELISTSECCHK(N)            /* CHECK SEC FILE FOR LIST? */
                         PLANAUDITLEVEL(0)              /* Audit level on DMs&FTAs    */
                         GRANTLOGONASBATCH(Y)           /* Automatically grant right? */
                         HOSTNAME(twsce2e.itso.ibm.com) /* DNS hostname for server    */
                         PORTNUMBER(31111)              /* Port for netman in USS     */


                Example 5-4 shows DOMREC and CPUREC initialization statements for the
                network in Figure 5-1 on page 266.
                Example 5-4 Domain and fault-tolerant agent definitions; member name is TPDOMAIN
                /**********************************************************************/
                /* DOMREC: Defines the domains in the distributed Tivoli Workload     */
                /*         Scheduler network                                          */
                /**********************************************************************/
                /*--------------------------------------------------------------------*/
                /* Specify one DOMREC for each domain in the distributed network,    */
                /* with the exception of the master domain (whose name is MASTERDM   */
                /* and which consists of the TWS for z/OS controller).               */
                /*--------------------------------------------------------------------*/
                DOMREC DOMAIN(UK)                   /* Domain name   = UK             */
                         DOMMNGR(U000)              /* Domain manager= LONDON         */
                         DOMPARENT(MASTERDM)        /* Domain parent = MASTERDM       */
                DOMREC DOMAIN(Europe)               /* Domain name   = Europe         */
                         DOMMNGR(E000)              /* Domain manager= Geneva         */
                         DOMPARENT(MASTERDM)        /* Domain parent = MASTERDM       */
                DOMREC DOMAIN(Nordic)               /* Domain name   = Nordic         */
                         DOMMNGR(N000)              /* Domain manager= Stockholm      */
                         DOMPARENT(MASTERDM)        /* Domain parent = MASTERDM       */
                /**********************************************************************/
                /**********************************************************************/
                /* CPUREC: Defines the workstations in the distributed Tivoli         */
                /*         Workload Scheduler network                                 */
                /**********************************************************************/
                /*--------------------------------------------------------------------*/
/* You must specify one CPUREC for each workstation in the TWS         */
/* network, with the exception of the OPC controller, which acts as    */
/* the master domain manager.                                          */
/*--------------------------------------------------------------------*/
CPUREC CPUNAME(U000)                 /* DM of UK domain               */
         CPUOS(AIX)                  /* AIX operating system          */
         CPUNODE(london.itsc.austin.ibm.com)     /* Hostname of CPU */
         CPUTCPIP(31182)             /* TCP port number of NETMAN     */
         CPUDOMAIN(UK)               /* The TWS domain name for CPU   */
         CPUTYPE(FTA)                /* CPU type: FTA/SAGENT/XAGENT   */
         CPUAUTOLNK(ON)              /* Autolink is on for this CPU   */
         CPUFULLSTAT(ON)             /* Full status on for DM         */
         CPURESDEP(ON)               /* Resolve dependencies on for DM*/
         CPULIMIT(20)                /* Number of jobs in parallel    */
         CPUTZ(CST)                  /* Time zone for this CPU        */
         CPUUSER(maestro)            /* Default user for jobs on CPU */
CPUREC CPUNAME(E000)                 /* DM of Europe domain           */
         CPUOS(WNT)                  /* Windows 2000 operating system */
         CPUNODE(geneva.itsc.austin.ibm.com)       /* Hostname of CPU */
         CPUTCPIP(31182)             /* TCP port number of NETMAN     */
         CPUDOMAIN(Europe)           /* The TWS domain name for CPU */
         CPUTYPE(FTA)                /* CPU type: FTA/SAGENT/XAGENT   */
         CPUAUTOLNK(ON)              /* Autolink is on for this CPU   */
         CPUFULLSTAT(ON)             /* Full status on for DM         */
         CPURESDEP(ON)               /* Resolve dependencies on for DM*/
         CPULIMIT(20)                /* Number of jobs in parallel    */
         CPUTZ(CST)                  /* Time zone for this CPU        */
         CPUUSER(tws)                /* Default user for jobs on CPU */
CPUREC CPUNAME(N000)                 /* DM of Nordic domain           */
         CPUOS(AIX)                 /* AIX operating system          */
         CPUNODE(stockholm.itsc.austin.ibm.com)    /* Hostname of CPU */
         CPUTCPIP(31182)             /* TCP port number of NETMAN     */
         CPUDOMAIN(Nordic)           /* The TWS domain name for CPU */
         CPUTYPE(FTA)                /* CPU type: FTA/SAGENT/XAGENT   */
         CPUAUTOLNK(ON)              /* Autolink is on for this CPU   */
         CPUFULLSTAT(ON)             /* Full status on for DM         */
         CPURESDEP(ON)               /* Resolve dependencies on for DM*/
         CPULIMIT(20)                /* Number of jobs in parallel    */
         CPUTZ(CST)                  /* Time zone for this CPU        */
         CPUUSER(tws)                /* Default user for jobs on CPU */
CPUREC CPUNAME(U001)                 /* 1st FTA in UK domain          */
         CPUOS(AIX)                  /* AIX operating system          */
         CPUNODE(belfast.itsc.austin.ibm.com)      /* Hostname of CPU */
         CPUTCPIP(31182)             /* TCP port number of NETMAN     */
         CPUDOMAIN(UK)               /* The TWS domain name for CPU   */
         CPUTYPE(FTA)                /* CPU type: FTA/SAGENT/XAGENT   */
         CPUAUTOLNK(ON)              /* Autolink is on for this CPU   */
         CPUFULLSTAT(OFF)            /* Full status off for FTA       */
         CPURESDEP(OFF)              /* Resolve dep. off for FTA      */
CPULIMIT(20)                /* Number of jobs in parallel    */
                           CPUSERVER(1)                /* Not allowed for DM/XAGENT CPU */
                           CPUTZ(CST)                  /* Time zone for this CPU        */
                           CPUUSER(tws)                /* Default user for jobs on CPU */
                CPUREC     CPUNAME(U002)               /* 2nd FTA in UK domain          */
                           CPUTYPE(FTA)                /* CPU type: FTA/SAGENT/XAGENT   */
                           CPUOS(WNT)                  /* Windows 2000 operating system */
                           CPUNODE(edinburgh.itsc.austin.ibm.com)    /* Hostname of CPU */
                           CPUTCPIP(31182)             /* TCP port number of NETMAN     */
                           CPUDOMAIN(UK)               /* The TWS domain name for CPU   */
                           CPUAUTOLNK(ON)              /* Autolink is on for this CPU   */
                           CPUFULLSTAT(OFF)            /* Full status off for FTA       */
                           CPURESDEP(OFF)              /* Resolve dep. off for FTA      */
                           CPULIMIT(20)                /* Number of jobs in parallel    */
                           CPUSERVER(2)                /* Not allowed for DM/XAGENT CPU */
                           CPUTZ(CST)                  /* Time zone for this CPU        */
                           CPUUSER(tws)                /* Default user for jobs on CPU */
                CPUREC     CPUNAME(E001)               /* 1st FTA in Europe domain      */
                           CPUOS(AIX)                  /* AIX operating system          */
                           CPUNODE(rome.itsc.austin.ibm.com)         /* Hostname of CPU */
                           CPUTCPIP(31182)             /* TCP port number of NETMAN     */
                           CPUDOMAIN(Europe)           /* The TWS domain name for CPU */
                           CPUTYPE(FTA)                /* CPU type: FTA/SAGENT/XAGENT   */
                           CPUAUTOLNK(ON)              /* Autolink is on for this CPU   */
                           CPUFULLSTAT(OFF)            /* Full status off for FTA       */
                           CPURESDEP(OFF)              /* Resolve dep. off for FTA      */
                           CPULIMIT(20)                /* Number of jobs in parallel    */
                           CPUSERVER(1)                /* Not allowed for domain mng.   */
                           CPUTZ(CST)                  /* Time zone for this CPU        */
                           CPUUSER(tws)                /* Default user for jobs on CPU */
                CPUREC     CPUNAME(E002)               /* 2nd FTA in Europe domain      */
                           CPUOS(WNT)                  /* Windows 2000 operating system */
                           CPUNODE(amsterdam.itsc.austin.ibm.com)    /* Hostname of CPU */
                           CPUTCPIP(31182)             /* TCP port number of NETMAN     */
                           CPUDOMAIN(Europe)           /* The TWS domain name for CPU */
                           CPUTYPE(FTA)                /* CPU type: FTA/SAGENT/XAGENT   */
                           CPUAUTOLNK(ON)              /* Autolink is on for this CPU   */
                           CPUFULLSTAT(OFF)            /* Full status off for FTA       */
                           CPURESDEP(OFF)              /* Resolve dep. off for FTA      */
                           CPULIMIT(20)                /* Number of jobs in parallel    */
                           CPUSERVER(2)                /* Not allowed for domain mng.   */
                            CPUTZ(CST)                  /* Time zone for this CPU        */
                            CPUUSER(tws)                /* Default user for jobs on CPU */
                CPUREC     CPUNAME(N001)               /* 1st FTA in Nordic domain      */
                            CPUOS(WNT)                  /* Windows 2000 operating system */
                           CPUNODE(oslo.itsc.austin.ibm.com)         /* Hostname of CPU */
                           CPUTCPIP(31182)             /* TCP port number of NETMAN     */
                           CPUDOMAIN(Nordic)           /* The TWS domain name for CPU */
CPUTYPE(FTA)                /* CPU type: FTA/SAGENT/XAGENT           */
         CPUAUTOLNK(ON)              /* Autolink is on for this CPU           */
         CPUFULLSTAT(OFF)            /* Full status off for FTA               */
         CPURESDEP(OFF)              /* Resolve dep. off for FTA              */
         CPULIMIT(20)                /* Number of jobs in parallel            */
         CPUSERVER(1)                /* Not allowed for domain mng.           */
         CPUTZ(CST)                  /* Time zone for this CPU                */
         CPUUSER(tws)                /* Default user for jobs on CPU          */
         SSLLEVEL(OFF)               /* Use SSL? ON/OFF/ENABLED/FORCE         */
         SSLPORT(31382)              /* Port for SSL communication            */
         FIREWALL(Y)                 /* Is CPU behind a firewall?             */
CPUREC   CPUNAME(N002)               /* 2nd FTA in Nordic domain              */
         CPUOS(UNIX)                 /* Linux operating system                */
         CPUNODE(helsinki.itsc.austin.ibm.com)     /* Hostname of CPU         */
         CPUTCPIP(31182)             /* TCP port number of NETMAN             */
         CPUDOMAIN(Nordic)           /* The TWS domain name for CPU           */
         CPUTYPE(FTA)                /* CPU type: FTA/SAGENT/XAGENT           */
         CPUAUTOLNK(ON)              /* Autolink is on for this CPU           */
         CPUFULLSTAT(OFF)            /* Full status off for FTA               */
         CPURESDEP(OFF)              /* Resolve dep. off for FTA              */
         CPULIMIT(20)                /* Number of jobs in parallel            */
         CPUSERVER(2)                /* Not allowed for domain mng.           */
         CPUTZ(CST)                  /* Time zone for this CPU                */
         CPUUSER(tws)                /* Default user for jobs on CPU          */
         SSLLEVEL(OFF)               /* Use SSL? ON/OFF/ENABLED/FORCE         */
         SSLPORT(31382)              /* Port for SSL communication            */
         FIREWALL(Y)                 /* Is CPU behind a firewall?             */
CPUREC   CPUNAME(N003)               /* 3rd FTA in Nordic domain              */
         CPUOS(WNT)                  /* Windows 2000 operating system         */
         CPUNODE(copenhagen.itsc.austin.ibm.com)   /* Hostname of CPU         */
         CPUTCPIP(31182)             /* TCP port number of NETMAN             */
         CPUDOMAIN(Nordic)           /* The TWS domain name for CPU           */
         CPUTYPE(FTA)                /* CPU type: FTA/SAGENT/XAGENT           */
         CPUAUTOLNK(ON)              /* Autolink is on for this CPU           */
         CPUFULLSTAT(OFF)            /* Full status off for FTA               */
         CPURESDEP(OFF)              /* Resolve dep. off for FTA              */
         CPULIMIT(20)                /* Number of jobs in parallel            */
         CPUSERVER(3)                /* Not allowed for domain mng.           */
         CPUTZ(CST)                  /* Time zone for this CPU                */
         CPUUSER(tws)                /* Default user for jobs on CPU          */
         SSLLEVEL(OFF)               /* Use SSL? ON/OFF/ENABLED/FORCE         */
         SSLPORT(31382)              /* Port for SSL communication            */
          FIREWALL(Y)                 /* Is CPU behind a firewall?             */
CPUREC   CPUNAME(UX01)               /* X-agent in UK Domain                  */
         CPUOS(OTHER)                /* Extended agent                        */
         CPUNODE(belfast.itsc.austin.ibm.com)  /* Hostname of CPU              */
         CPUDOMAIN(UK)               /* The TWS domain name for CPU           */
         CPUHOST(U001)               /* U001 is the host for x-agent          */
         CPUTYPE(XAGENT)             /* This is an extended agent             */
CPUACCESS(unixlocl)         /* use unixlocl access method     */
                           CPULIMIT(2)                 /* Number of jobs in parallel     */
                           CPUTZ(CST)                  /* Time zone for this CPU         */
                           CPUUSER(tws)                /* Default user for jobs on CPU   */
                CPUREC     CPUNAME(UX02)               /* X-agent in UK Domain           */
                           CPUOS(OTHER)                /* Extended agent                 */
                           CPUNODE(belfast.itsc.austin.ibm.com)  /* Hostname of CPU       */
                           CPUDOMAIN(UK)               /* The TWS domain name for CPU    */
                           CPUHOST(U001)               /* U001 is the host for x-agent   */
                           CPUTYPE(XAGENT)             /* This is an extended agent      */
                           CPUACCESS(unixrsh)          /* use unixrsh access method      */
                           CPULIMIT(2)                 /* Number of jobs in parallel     */
                           CPUTZ(CST)                  /* Time zone for this CPU         */
                           CPUUSER(tws)                /* Default user for jobs on CPU   */


                User and password definitions for the Windows fault-tolerant workstations are
                defined as shown in Example 5-5.
                Example 5-5 User and password definition for Windows FTAs; member name is TPUSER
                /*********************************************************************/
                /* USRREC: Windows users password definitions                        */
                /*********************************************************************/
                /*-------------------------------------------------------------------*/
                /* You must specify at least one USRREC for each Windows workstation */
                /* in the distributed TWS network.                                   */
                /*-------------------------------------------------------------------*/
                USRREC USRCPU(U002)
                       USRNAM(tws)
                       USRPSW('tws')
                USRREC USRCPU(E000)
                       USRNAM(tws)
                       USRPSW('tws')
                USRREC USRCPU(E002)
                       USRNAM(tws)
                       USRPSW('tws')
                USRREC USRCPU(N001)
                       USRNAM(tws)
                       USRPSW('tws')
                USRREC USRCPU(N003)
                       USRNAM(tws)
                       USRPSW('tws')




5.2 Creation of the Symphony file in detail
         A new Symphony file is generated whenever any of these daily planning batch
         jobs is run:
            Extend the current plan.
            Replan the current plan.
            Renew the Symphony.

         Daily planning batch jobs must be able to read from and write to the HFS working
         directory (WRKDIR) because these jobs create the Symnew file in WRKDIR. For
         this reason, the group associated with WRKDIR must contain all of the users that
         will run daily planning batch jobs.

         The end-to-end server task starts the translator process in USS (via the starter
         process). The translator process inherits its ownership from the starting task, so
         it runs as the same user as the end-to-end server task.

         The translator process must be able to read from and write to the HFS working
         directory (WRKDIR). For this reason, WRKDIR must be owned by the user
         associated with the end-to-end server started task (E2ESERV in the following
         example). This underscores the importance of specifying the correct user and
         group in EQQPCS05.
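
          As a hedged illustration (using the work directory, user, and group names from
          the examples in this section), the following USS commands show how you might
          verify and, if necessary, correct the ownership and permissions of WRKDIR so
          that both the translator and the planning batch jobs can write to it:

          # Display the current owner, group, and permissions of the work directory
          ls -ld /tws/twsce2ew
          # If needed, make the server user the owner, assign the planners' group,
          # and allow group members to write to the directory
          chown E2ESERV:TWSGRP /tws/twsce2ew
          chmod 775 /tws/twsce2ew

          EQQPCS05 normally creates the directory with the correct owner and group; the
          commands above are simply one way to check the result.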

         Figure 5-2 shows the steps of Symphony file creation:
         1. The daily planning batch job copies the Symphony Current Plan VSAM data
            set to an HFS file in WRKDIR called SymUSER, where USER is the user
            name of the user who submitted the batch job.
         2. The daily planning batch job renames SymUSER to Symnew.
         3. The translator program running in UNIX System Services copies Symnew to
            Symphony and Sinfonia.




                [Figure: the end-to-end server task (E2ESERV) on z/OS starts the starter and
                translator processes in USS. The daily planning batch jobs (EXTENDCP,
                REPLANCP, REFRESHCP, SYMRENEW) run as USER3, a member of the TWSGRP
                group. In step 1 the daily planning batch job copies the Symphony current plan
                VSAM data set (EQQSCPDS) to the file SymUSER3 in WRKDIR; in step 2 it
                renames SymUSER3 to Symnew; in step 3 the translator, started as E2ESERV,
                copies Symnew to Symphony and Sinfonia.]

                Figure 5-2 Creation of the Symphony file in WRKDIR

                Figure 5-2 illustrates how the translator program inherits its process ownership
                from the end-to-end server task. It also shows how file ownership of Symnew
                and Symphony is inherited from the daily planning batch jobs and the translator,
                respectively.



5.3 Migrating Tivoli OPC tracker agents to end-to-end
scheduling
                In this section, we describe how to migrate from a Tivoli OPC tracker agent
                scheduling environment to a Tivoli Workload Scheduler for z/OS end-to-end
                scheduling environment with Tivoli Workload Scheduler fault-tolerant agents. We
                show the benefits of migrating to the fault-tolerant workstations with a
                step-by-step migration procedure.


5.3.1 Migration benefits
                If you plan to migrate to the end-to-end solution, you can gain the following advantages:
                    The use of fault-tolerant technology enables you to continue scheduling
                    without a continuous connection to the z/OS engine.



Multi-tier architecture enables you to configure your distributed environment
into logical and geographic needs through the domain topology.
The monitoring of workload can be separated, based on dedicated distributed
views.
The multi-tier architecture also improves scalability and removes the limitation
on the number of tracker agent workstations in Tivoli OPC. (In Tivoli OPC, the
designated maximum number of tracker agents was 999, but the practical limit
was around 500.)
High availability configuration through:
– The support of AIX High Availability Cluster Multi-Processing (HACMP™),
  HP Service Guard, and Windows clustering, for example.
– Support for using host names instead of numeric IP addresses.
– The ability to change workstation addresses as well as the distributed network
  topology without recycling the Tivoli Workload Scheduler for z/OS
  controller; only a replan of the current plan is required.
New supported platforms and operating systems, such as:
– Windows 2000 and Windows XP
– SuSE Linux Enterprise Server for zSeries Version 7
– Red Hat Linux (Intel®) Version 7.2, 7.3
– Other third-party access methods such as Tandem
For a complete list of supported platforms and operating system levels, refer
to IBM Tivoli Workload Scheduler Release Notes Version 8.2 (Maintenance
Release April 2004), SC32-1277.
Support for extended agents.
Extended agents (XA) are used to extend the job scheduling functions of
Tivoli Workload Scheduler to other systems and applications. An extended
agent is defined as a workstation that has a host and an access method.
Extended agents make it possible to run jobs in the end-to-end scheduling
solution on:
– Oracle E-Business Suite
– PeopleSoft
– SAP R/3
For more information, refer to IBM Tivoli Workload Scheduler for Applications
User’s Guide Version 8.2 (Maintenance Release April 2004), SC32-1278.
Open extended agent interface, which enables you to write extended agents
for non-supported platforms and applications. For example, you can write
your own extended agent for Tivoli Storage Manager. For more information,
                    refer to Implementing TWS Extended Agent for Tivoli Storage Manager,
                    GC24-6030.
                     User ID and password definitions for Windows fault-tolerant workstations are
                     easier to implement and maintain, and they do not require the Tivoli OPC
                     tracker agent impersonation support.
                    IBM Tivoli Business Systems Manager support enables you to integrate the
                    entire end-to-end environment.
                    If you use alternate workstations for your tracker agents, be aware that this
                    function is not available in fault-tolerant agents. As part of the fault-tolerant
                     technology, an FTW cannot be an alternate workstation.
                    You do not have to touch your planning-related definitions such as run cycles,
                    periods, and calendars.


5.3.2 Migration planning
                Before starting the migration process, consider the following issues:
                    The Job Migration Tool in Tivoli Workload Scheduler for z/OS 8.2 can be used
                    to facilitate the migration from distributed tracker agents to Tivoli Workload
                    Scheduler distributed agents.
                    You may choose not to migrate your entire tracker agent environment at once.
                    For better planning, we recommend first deciding which part of your tracker
                    environment is best suited for migration. This enables you to migrate smoothly
                    to the new fault-tolerant agents. The decision can be based on:
                    – Agents belonging to a certain business unit
                    – Agents running at a specific location or time zone
                    – Agents having dependencies on Tivoli Workload Scheduler for z/OS job
                      streams
                    – Agents used for testing purposes
                    The tracker agent topology is not based on any domain manager structure
                    as used in the Tivoli Workload Scheduler end-to-end solution, so plan the
                    topology configuration that suits your needs. The guidelines for helping you
                    find your best configuration are detailed in 3.5.4, “Network planning and
                    considerations” on page 141.
                    Even though you can use centralized scripts to facilitate the migration from
                    distributed tracker agents to Tivoli Workload Scheduler distributed agents, it
                    may be necessary to make some modifications to the JCL (the script) used at
tracker agents when the centralized script for the corresponding fault-tolerant
              workstation is copied or moved.
              For example, this is the case for comments:
              – In JCL for tracker agent, a comment line can commence with //*
                     //* This is a comment line
              – In centralized script, a comment line can commence with //* OPC
                     //* OPC This is a comment line

             Tip: We recommend starting the migration with the least critical workload in the
             environment. The migration process requires some hands-on experience;
             therefore, you could start by migrating a test tracker agent with test scripts. If
             this is successful, you can continue with less critical production job streams
             and progress to the most important ones.

            If centralized script is used, the migration from tracker agents to fault-tolerant
            workstations should be a simple task. Basically, the migration is done by
            changing the workstation name from the name of a tracker agent workstation to
            the name of the new fault-tolerant workstation. This is especially true if you
            follow the migration checklist that is outlined in the following sections.

            Also note that with centralized script you can assign a user to a fault-tolerant
            workstation job in exactly the same way as you did for tracker agents (for
            example, by using the job submit exit, EQQUX001).

            Important: Tivoli OPC tracker agent went out of support on October 31, 2003.


5.3.3 Migration checklist
           To guide you through the migration, Table 5-1 provides a step-by-step checklist.
           Table 5-1 Migration checklist
            Migration actions                            Page

            1. Install IBM Tivoli Workload Scheduler     “Installing IBM Tivoli Workload Scheduler
               end-to-end on z/OS mainframe.             end-to-end solution” on page 278

            2. Install fault-tolerant agents on each     “Installing fault-tolerant agents” on
               tracker agent server or system that       page 279
               should be migrated to end-to-end.

            3. Define the topology for the distributed   “Define the network topology in the
               Tivoli Workload Scheduler network.        end-to-end environment” on page 279

                  4. Decide if centralized, non-centralized,      “Decide to use centralized or
                     or a combination of centralized and          non-centralized script” on page 281
                     non-centralized script should be used.

                  5. Define centralized script.                   “Define the centralized script” on
                                                                  page 284

                  6. Define non-centralized script.               “Define the non-centralized script” on
                                                                  page 285

                  7. Define user ID and password for              “Define the user and password for
                     Windows fault-tolerant workstations.         Windows FTWs” on page 285

                  8. Change the workstation name inside           “Change the workstation name inside the
                     the job streams from tracker agent           job streams” on page 285
                     workstation name to fault-tolerant
                     workstation name.

                  9. Consider doing some parallel testing         “Parallel testing” on page 286
                     before the definitive shift from tracker
                     agents to fault-tolerant agents.

                  10. Perform the cutover.                        “Perform the cutover” on page 287

                  11. Educate and train planners and              “Education and training of operators and
                      operators.                                  planners” on page 287


5.3.4 Migration actions
                We now explain each step of the migration actions listed in Table 5-1 in detail.

                Installing IBM Tivoli Workload Scheduler end-to-end solution
                The Tivoli Workload Scheduler for z/OS end-to-end feature is required for the
                migration, and its installation and configuration are detailed in 4.2, “Installing
                Tivoli Workload Scheduler for z/OS end-to-end scheduling” on page 159.

                  Important: It is important to start the installation of the end-to-end solution as
                  early as possible in the migration process to gain as much experience as
                  possible with this new environment before it should be handled in the
                  production environment.

                End-to-end scheduling is not complicated, but job scheduling on the distributed
                systems works very differently in an end-to-end environment than in the tracker
                agent scheduling environment.




Installing fault-tolerant agents
When you have decided which tracker agents to migrate, you can install the Tivoli
Workload Scheduler code on the machines or servers that host the tracker agent.
This enables you to migrate a mixed environment of tracker agents and
fault-tolerant workstations in a more controlled way, because both environments
(Tivoli Workload Scheduler and Tivoli OPC Tracker Agents) can coexist on the
same physical machine.

Both environments might coexist until you decide to perform the cutover. Cutover
means switching to the fault-tolerant agent after the testing phase.

Installation of the fault-tolerant agents is explained in detail in 4.3, “Installing
Tivoli Workload Scheduler in an end-to-end environment” on page 207.


Define the network topology in the end-to-end environment
In Tivoli Workload Scheduler for z/OS, define the topology of the Tivoli Workload
Scheduler network. The definition process consists of the following steps:
1. Designing the end-to-end network topology.
2. Definition of the network topology in Tivoli Workload Scheduler for z/OS with
   the DOMREC and CPUREC keywords.
3. Definition of the fault-tolerant workstations in the Tivoli Workload Scheduler
   for z/OS database.
4. Activation of the fault-tolerant workstations in the Tivoli Workload Scheduler
   for z/OS plan by a plan extend or plan replan batch job.

     Tips:
         If you decide to define a topology with domain managers, you should
         also define backup domain managers (see the sketch after this tip box).
        To better distinguish the fault-tolerant workstations, follow a consistent
        naming convention.
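
The following CPUREC fragment is a hedged sketch of how a backup domain
manager might be defined for a domain such as the UK domain in Example 5-4;
the workstation name F001 and its host name are invented for illustration. The
key point is that a backup domain manager is an ordinary fault-tolerant agent in
the same domain, defined with full status and resolve dependencies switched on
so that it can take over from the domain manager:

CPUREC CPUNAME(F001)                 /* Hypothetical backup DM for UK */
       CPUOS(AIX)                    /* AIX operating system          */
       CPUNODE(backupdm.itsc.austin.ibm.com)  /* Invented hostname    */
       CPUTCPIP(31182)               /* TCP port number of NETMAN     */
       CPUDOMAIN(UK)                 /* Same domain as the DM U000    */
       CPUTYPE(FTA)                  /* A backup DM is a normal FTA   */
       CPUAUTOLNK(ON)                /* Autolink is on for this CPU   */
       CPUFULLSTAT(ON)               /* Full status on, as for a DM   */
       CPURESDEP(ON)                 /* Resolve dependencies on       */
       CPULIMIT(20)                  /* Number of jobs in parallel    */
       CPUTZ(CST)                    /* Time zone for this CPU        */
       CPUUSER(tws)                  /* Default user for jobs on CPU  */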

After completion of the definition process, each computer should have two
workstation definitions: one for the tracker agent and one for the distributed
agent. This way, you can run a distributed agent and a tracker agent on the same
computer or server, which enables you to gradually migrate jobs from tracker
agents to distributed agents.

Example: From tracker agent network to end-to-end network
In this example, we illustrate how an existing tracker agent network can be
reflected in (or converted to) an end-to-end network topology. This example also
shows the major differences between tracker agent network topology and the
                end-to-end network topology.

                Figure 5-3 shows what we can call a classic tracker agent environment. This
                environment consists of multiple tracker agents on various operating platforms.
                All communication with the tracker agents is handled by a single subtask in the
                Tivoli Workload Scheduler for z/OS controller started task; there is no domain
                structure with multiple levels (tiers) to minimize the load on the controller.


                [Figure: the OPC Controller on z/OS communicates directly with each of the
                tracker agents, which run on various platforms (AIX, OS/400, Solaris), without
                any domain structure.]

                Figure 5-3 A classic tracker agent environment


                Figure 5-4 shows how the tracker agent environment in Figure 5-3 on page 280
                can be defined in an end-to-end scheduling environment by use of domain
                managers, back-up domain managers, and fault-tolerant agents.




[Figure: the MASTERDM domain at the top contains the master domain manager
OPCMASTER (the z/OS controller and end-to-end server). Below it are two domains,
DomainA and DomainB, each with an AIX domain manager (FDMA and FDMB). In
DomainA, FTA1 (AIX) acts as backup domain manager and FTA2 runs on OS/400; in
DomainB, FTA3 (AIX) acts as backup domain manager and FTA4 runs on Solaris.]

Figure 5-4 End-to-end scheduling network with DMs and FTAs

In the migration phase, it is possible for these two environments to co-exist. This
means that on every machine, a tracker agent and a fault-tolerant workstation
are installed.

Decide to use centralized or non-centralized script
When migrating from tracker agents to fault-tolerant agents, you have two options
regarding scripts: you can use centralized or non-centralized scripts.

If all of the tracker agent JCL (script) is placed in the Tivoli Workload Scheduler
for z/OS controller job library, the simplest solution when migrating to
end-to-end is to use centralized scripts.

But if all of the tracker agent JCL (script) is placed locally on the tracker agent
systems, the simplest solution when migrating to end-to-end is to use
non-centralized scripts.

Finally, if the tracker agent JCL is placed both in the Tivoli Workload Scheduler
for z/OS controller job library and locally on the tracker agent systems, the
easiest approach is to migrate to end-to-end scheduling with a combination of
centralized and non-centralized scripts.



Use of the Job Migration Tool to help with the migration
                 This tool helps you analyze the existing tracker agent environment so that you
                 can decide whether the tracker agent JCL should be migrated using centralized
                 script, non-centralized script, or a combination of the two.

                To run the tool, select option 1.1.5 from the main menu in Tivoli Workload
                Scheduler for z/OS legacy ISPF. In the panel, enter the name for the tracker
                agent workstation that you would like to analyze and submit the job generated by
                Tivoli Workload Scheduler for z/OS.

                  Note: Before submitting the job, modify it by adding all JOBLIBs for the tracker
                  agent workstation that you are analyzing. Also remember to add JOBLIBs
                  processed by the job-library-read exit (EQQUX002) if it is used.

                  For a permanent change of the sample job, modify the sample migration job
                  skeleton, EQQWMIGZ.
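
                 As a purely illustrative sketch, the fragment below shows how additional job
                 libraries might be concatenated in the generated migration job. The DD name
                 and data set names here are assumptions, not values from our environment;
                 check the JCL generated by Tivoli Workload Scheduler for z/OS for the actual
                 names:

                 //* Hypothetical example: concatenate every library that contains JCL
                 //* for the tracker agent workstation that is being analyzed
                 //EQQJBLIB DD DISP=SHR,DSN=TWS.INST.JOBLIB
                 //         DD DISP=SHR,DSN=TWS.INST.TRACKER.JOBLIB
                 //         DD DISP=SHR,DSN=TWS.INST.UX002.JOBLIB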

                The tool analyzes the operations (jobs) that are defined on the specified
                workstation and generates output in four data sets:
                1. Report data set (default suffix: LIST)
                    Contains warning messages for the processed jobs on the workstation
                    specified as input to the tool. (See Example 5-6 on page 283.) There will be
                    warning messages for:
                    – Operations (jobs) that are associated with a job library member that uses
                      JCL variables and directives and that have the centralized script option set
                      to N (No).
                    – Scripts (JCL) that do not have variables and are associated with
                      operations that have the centralized script option set to Y (Yes). (This
                      situation lowers performance.)
                    – Operations (jobs) for which the tool did not find the JCL (member not
                      found) in the JOBLIB libraries that were specified as input to the tool.

                      Important: Check the tool report for warning messages.

                    For jobs (operations) defined with centralized script option set to No (the
                    default), the tool suggests defining the job on a workstation named DIST. For
                    jobs (operations) defined with centralized script option set to Yes, the tool
                    suggests defining the job on a workstation named CENT.
                    The last part of the report contains a cross-reference that shows which
                    application (job stream) the job (operation) is defined in.


The report is a good starting point for an overview of the migration effort.

                         Note: The NT01JOB1 operation (job) is defined in two different
                         applications (job streams): NT01HOUSEKEEPING and NT01TESTAPPL.
                         The NT01JOB1 operation is defined with centralized script option set to
                         Yes in the NT01TESTAPPL application and No in the
                         NT01HOUSEKEEPING application. That is why the JT01JOB1 is defined
                         on both the CENT and the DIST workstations.

                  2. JOBLIB data set (default suffix: JOBLIB)
                        This library contains a copy of all detected jobs (members) for a specific
                        workstation. The job is copied from the JOBLIB.
                        In our example (Example 5-6), there are four jobs in this library: NT01AV01,
                        NT01AV02, NT01JOB1, and NT01JOB2.
                  3. JOBCEN data set (default suffix: JOBCEN)
                        This library contains a copy of all jobs (members) that have centralized scripts
                        for a specific workstation that is defined with the centralized script option set
                        to Yes. The job is copied from the JOBLIB.
                        In our example, (Example 5-6), there are two jobs in this library: NT01JOB1
                        and NT01JOB2. These jobs were defined in Tivoli Workload Scheduler for
                        z/OS with the centralized script option set to Yes.
                  4. JOBDIS data set (default suffix: JOBDIS).
                        This library contains all jobs (members) that do not have centralized scripts
                        for a specific workstation. These jobs must be transferred to the fault-tolerant
                        workstation.
                        In our example (Example 5-6), there are three jobs in this library: NT01AV01,
                        NT01AV02, and NT01JOB1. These jobs were defined in Tivoli Workload
                        Scheduler for z/OS with the centralized script option set to No (the default).
Example 5-6 Report generated by the Job Migration Tool
P R I N T O U T   O F   W O R K   S T A T I O N   D E S C R I P T I O N S
                        = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
REPORT TYPE: CROSS-REFERENCE OF JOBNAMES AND ACTIVE APPLICATIONS
================================================================
JOBNAME APPL ID           VALID TO OpTYPE_OpNUMBER
-------- ---------------- -------- --------------------------------------------------------------

NT01AV01 NT01HOUSEKEEPING 31/12/71 DIST_005
         NT01TESTAPPL2    31/12/71 DIST_005
NT01AV02 NT01TESTAPPL2    31/12/71 DIST_010
NT01AV03 NT01TESTAPPL2    31/12/71 DIST_015
  WARNING: NT01AV03 member not found in job library
NT01JOB1 NT01HOUSEKEEPING 31/12/71 DIST_010
NT01TESTAPPL     31/12/71 CENT_005
  WARNING: Member NT01JOB1 contain directives (//*%OPC) or variables (& or % or ?).
           Modify the member manually or change the operation(s) type to centralized.
NT01JOB2 NT01TESTAPPL     31/12/71 CENT_010
  WARNING: You could change operation(s) to NON centralized type.


APPL ID          VALID TO JOBNAME   OpTYPE_OpNUMBER
---------------- -------- --------- --------------------------------------------------------------
NT01HOUSEKEEPING 31/12/71 NT01AV01 DIST_005
                          NT01JOB1 DIST_010
NT01TESTAPPL     31/12/71 NT01JOB1 CENT_005
                          NT01JOB2 CENT_010
NT01TESTAPPL2    31/12/71 NT01AV01 DIST_005
                          NT01AV02 DIST_010
                          NT01AV03 DIST_015
                               >>>>>>> END OF APPLICATION DESCRIPTION PRINTOUT <<<<<<<


                   Before you migrate the tracker agents to distributed agents, you should use this
                   tool to obtain these files to help you decide whether the jobs should be defined
                   with centralized or non-centralized scripts.

                  Define the centralized script
                  If you decide to use centralized script for all or some of the tracker agent jobs, do
                  the following:
                  1. Run the job migration tool for each tracker agent workstation and analyze the
                     generated report.
                  2. Change the value of the centralized script flag to Yes, based on the result of
                     the job migration tool output and your decision.
                   3. Run the job migration tool as many times as you want; for example, you can
                      run it until there are no warning messages and all jobs are defined on the
                      correct workstation in the report (the CENT workstation).
                   4. Change the generated JCL (jobs) in the JOBCEN data set (created by the
                      migration tool); for example, it might be necessary to change the comment
                      lines from //* to //* OPC.

                        Note: If you plan to run the migration tool several times, you should copy
                        the job to another library when it has been changed and is ready for the
                        switch to avoid it being replaced by a new run of the migration tool.

                  5. The copied and amended members (jobs) can be activated one by one when
                     the corresponding operation in the Tivoli Workload Scheduler for z/OS
                     application is changed from the tracker agent workstation to the fault-tolerant
                     workstation.



Define the non-centralized script
If you decide to use non-centralized script for all or some of the tracker agent
jobs, do the following:
1. Run the job migration tool for each tracker agent workstation and analyze the
   generated report.
2. Run the job migration tool as many times as you want; for example, you can
   run it until there are no warning messages and all jobs are defined on the
   correct workstation in the report (the DIST workstation).
3. Transfer the scripts from the JOBDIS data set (created by the migration tool)
   to the distributed agents.
4. Create a member in the script library (SCRPTLIB/EQQSCLIB) for every job in
   the JOBDIS data set and, optionally, for the jobs in JOBCEN (if you decide to
   change these jobs from centralized script to non-centralized script), as
   shown in the sketch below.
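
A SCRPTLIB (EQQSCLIB) member for a non-centralized script job could look like
the following sketch. The member name, script path, and user are invented for
illustration; the JOBREC statement with the JOBSCR and JOBUSR keywords
identifies the script to run on the fault-tolerant workstation and the user that
runs it:

/* Hypothetical member NT01AV01 in the SCRPTLIB (EQQSCLIB) library    */
JOBREC JOBSCR('/tivoli/tws/scripts/nt01av01.sh') /* Script on the FTA */
       JOBUSR(tws)                               /* User that runs it */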

    Note: The job submit exit EQQUX001 is not called for non-centralized
    script jobs.


Define the user and password for Windows FTWs
For each user running jobs on Windows fault-tolerant agents, define a new
USRREC statement to provide the Windows user and password. USRREC is
defined in the member of the EQQPARM library that is specified by the USRMEM
keyword in the TOPOLOGY statement.

 Important: Because the passwords are not encrypted, we strongly
 recommend that you protect the data set containing the USRREC definitions
 with your security product.
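
For example, if RACF is your security product, the library that contains the
USRREC member could be protected as sketched below. The data set name is
the EQQPARM library from Example 5-1, and the group name TWSADM is
invented for illustration; remember that the controller and end-to-end server
started tasks still need at least read access to the library:

ADDSD  'TWS.INST.PARM' UACC(NONE)
PERMIT 'TWS.INST.PARM' ID(TWSADM) ACCESS(UPDATE)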

If you use the impersonation support for NT tracker agent workstations, it does
not interfere with the USRREC definitions. The impersonation support assigns a
user ID based on the user ID from exit EQQUX001. Since the exit is not called for
jobs with non-centralized script, impersonation support is not used any more.

Change the workstation name inside the job streams
At this point in the migration, the end-to-end scheduling environment should be
active and the fault-tolerant workstations on the systems with tracker agents
should be active and linked in the plan in Tivoli Workload Scheduler for z/OS.

The plan in Tivoli Workload Scheduler for z/OS and the Symphony file on the
fault-tolerant agents do not yet contain any job streams with jobs that are
scheduled on the new fault-tolerant workstations.


The job streams (applications) in the Tivoli Workload Scheduler for z/OS
                controller are still pointing to the tracker agent workstations.

                In order to submit workload to the distributed environment, you must change the
                workstation name of your existing job definitions to the new FTW, or define new
                job streams to replace the job streams with the old tracker agent jobs.

                  Notes:
                       It is not possible to change the workstation within a job instance from a
                       tracker agent to a fault-tolerant workstation via the Job Scheduling
                       Console. This issue has already been raised with development. The
                       change can be performed via the legacy ISPF interface or the batch loader
                       program.
                      Be aware that changes to the workstation affect only the job stream
                      database. If you want to take this modification into the plans, you must run
                       a long-term plan (LTP) Modify All batch job and a current plan extend or
                      replan batch job.
                      The highest acceptable return code for operations on fault-tolerant
                      workstations is 0. If you have a tracker agent operation with highest return
                      code set to 8 and you change the workstation for this operation from a
                      tracker agent workstation to a fault-tolerant workstation, you will not be able
                      to save the modified application.
                      When trying to save the application, you will see this error message:
                          EQQA531E Inconsistent option when FT work station
                      Be aware of this if you are planning to use Tivoli Workload Scheduler for
                      z/OS mass update functions or unload/reload functions to update a large
                      number of applications.


                Parallel testing
                If possible, do some parallel testing before the cutover. With parallel testing, you
                run the same job flow on both types of workstations: tracker agent workstations
                and fault-tolerant workstations.

                The only problem with parallel testing is that it requires duplicate versions of the
                applications (job streams): one application for the tracker agent and one
                application for the fault-tolerant workstation. Also, you cannot run the same job in
                both applications, so one of the jobs must be changed to a dummy job.

                Some initial setup is required for parallel testing, but once it is done you can
                verify that the jobs are executed in the same sequence, and operators and
                planners can gain some experience with the new environment.



Another approach could be to migrate a few applications from tracker agents to
fault-tolerant agents and use these applications to verify the migration strategy
and the migrated jobs (JCL/script), and to gain some experience. When you are
satisfied with the test results for these applications, the next step is to migrate the
rest of the applications.

Perform the cutover
When the parallel testing has been completed with satisfactory results, you can
do the final cutover. For example, the process can be:
   Change all workstation names from tracker agent workstation to fault-tolerant
   workstation for all operations in the Tivoli Workload Scheduler for z/OS
   controller.
   This can be done with the Tivoli Workload Scheduler for z/OS mass update
   function, or by an unload (with the Batch Command Interface Tool), edit, and
   batch load (with the Tivoli Workload Scheduler for z/OS batch loader) process.
   Run the Extend of long-term plan batch job or Modify All of long-term plan in
   Tivoli Workload Scheduler for z/OS.
   Verify that the changed applications and operations look correct in the
   long-term plan.
   Run the Extend of plan (current plan) batch job.
   – Verify that the changed applications and operations look correct in the
     plan.
   – Verify that the tracker agent jobs have been moved to the new
     fault-tolerant workstations and that there are no jobs on the tracker agent
     workstations.

Education and training of operators and planners
Tracker agents and fault-tolerant workstations work differently, and there are new
options related to jobs on fault-tolerant workstations. Handling of fault-tolerant
workstations differs from handling of tracker agents.

A tracker agent workstation can be set to Active or Offline and can be defined
with open intervals and servers. A fault-tolerant workstation can be started,
stopped, linked, or unlinked.

To ensure that the migration from tracker agents to fault-tolerant workstations will
be successful, be sure to plan for education of your planners and operators.




5.3.5 Migrating backward
                Normally, it should not be necessary to migrate backward because it is possible
                to run the two environments in parallel. As we have shown, you can run a tracker
                agent and a fault-tolerant agent on the same physical machine. If the
                preparation, planning, and testing of the migration are done as described
                previously, backward migration should not be needed.

                If a situation forces backward migration from the fault-tolerant workstations to
                tracker agents, follow these steps:
                1. Install the tracker agent on the computer. (This is necessary only if you have
                   uninstalled the tracker agent.)
                2. Define a new destination in the ROUTOPTS initialization statement of the
                   controller and restart the controller.
                3. Make a duplicate of the workstation definition of the computer. Define the new
                   workstation as Computer Automatic instead of Fault Tolerant and specify the
                   destination you defined in step 2. This way, the same computer can be run as
                   a fault-tolerant workstation and as a tracker agent for smoother migration.
                4. For non-centralized scripts, copy the scripts from the fault-tolerant workstation
                   repository to the JOBLIB. As an alternative, copy the script to a local directory
                   that can be accessed by the tracker agent and create a JOBLIB member to
                   execute the script. You can accomplish this by using FTP (see the sketch
                   after these steps).
                5. Implement the EQQUX001 sample to execute jobs with the correct user ID.
                6. Modify the workstation name inside the operation. Remember to change the
                   JOBNAME if the member in the JOBLIB has a name different from the
                   member of the script library.
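
                As a rough sketch of the FTP approach in step 4 (the host name, user ID, script
                path, and data set name here are only illustrative), the script can be copied from
                the fault-tolerant workstation into a JOBLIB member like this:

                   ftp wtsc64.itso.ibm.com
                   User: TWSRES1
                   Password: ********
                   ftp> ascii
                   ftp> put /opt/tws/scripts/dailyload.sh 'TWS.INST.JOBLIB(DAILYLD)'
                   ftp> quit

                Because the JOBLIB is typically allocated with fixed 80-byte records, check that no
                line in the script is longer than 80 characters before you transfer it.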



5.4 Conversion from Tivoli Workload Scheduler network
to Tivoli Workload Scheduler for z/OS managed network
                In this section, we outline the guidelines for converting a Tivoli Workload
                Scheduler network to a Tivoli Workload Scheduler for z/OS managed network.

                The distributed Tivoli Workload Scheduler network is managed by a Tivoli
                Workload Scheduler master domain manager, which manages the databases
                and the plan. Converting the Tivoli Workload Scheduler managed network to a
                Tivoli Workload Scheduler for z/OS managed network means that responsibility
                for database and plan management moves from the Tivoli Workload Scheduler
                master domain manager to the Tivoli Workload Scheduler for z/OS engine.




5.4.1 Illustration of the conversion process
           Figure 5-5 shows a distributed Tivoli Workload Scheduler network. The database
           management and daily planning are carried out by the Tivoli Workload Scheduler
           master domain manager.


            [Figure: MASTERDM domain with an AIX master domain manager at the top;
            DomainA (domain manager DMA, AIX) and DomainB (domain manager DMB, HPUX)
            below it; and FTA1 (AIX), FTA2 (OS/400), FTA3 (Windows 2000), and
            FTA4 (Solaris) at the bottom.]

            Figure 5-5 Tivoli Workload Scheduler distributed network with a master domain manager

           Figure 5-6 shows a Tivoli Workload Scheduler for z/OS managed network.
           Database management and daily planning are carried out by the Tivoli Workload
           Scheduler for z/OS engine.




                [Figure: a z/OS sysplex running the Tivoli Workload Scheduler for z/OS
                active engine on one system and standby engines on two other systems.]

                Figure 5-6 Tivoli Workload Scheduler for z/OS network

                The conversion process changes the Tivoli Workload Scheduler master
                domain manager into the first-level domain manager and then connects it to the
                Tivoli Workload Scheduler for z/OS engine (the new master domain manager). The
                result of the conversion is a new end-to-end network managed by the Tivoli
                Workload Scheduler for z/OS engine (Figure 5-7 on page 291).




            [Figure: MASTERDM is now a z/OS sysplex with the active master domain manager
            (the Tivoli Workload Scheduler for z/OS engine) and two standby master domain
            managers; DomainZ (domain manager DMZ, AIX) is connected to the sysplex;
            DomainA (domain manager DMA, AIX) and DomainB (domain manager DMB, HPUX) are
            below DomainZ; FTA1 (AIX), FTA2 (OS/400), FTA3 (Windows 2000), and FTA4
            (Solaris) are at the bottom.]

            Figure 5-7 IBM Tivoli Workload Scheduler for z/OS managed end-to-end network


5.4.2 Considerations before doing the conversion
           Before you start to convert your Tivoli Workload Scheduler managed network to
           a Tivoli Workload Scheduler for z/OS managed network, you should evaluate the
           positives and negatives of doing the conversion.

           The pros and cons of doing the conversion will differ from installation to
           installation. Some installations will gain significant benefits from conversion,
           while other installations will gain fewer benefits. Based on the outcome of the
           evaluation of pros and cons, it should be possible to make the right decision for
           your specific installation and current usage of Tivoli Workload Scheduler as well
           as Tivoli Workload Scheduler for z/OS.




Some important aspects of the conversion that you should consider are:
                     How are your Tivoli Workload Scheduler and Tivoli Workload Scheduler for
                     z/OS organizations structured today?
                    – Do you have two independent organizations working independently of
                      each other?
                    – Do you have two groups of operators and planners to manage Tivoli
                      Workload Scheduler and Tivoli Workload Scheduler for z/OS?
                    – Or do you have one group of operators and planners that manages both
                      the Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS
                      environments?
                    – Do you use considerable resources keeping a high skill level for both
                      products, Tivoli Workload Scheduler and Tivoli Workload Scheduler for
                      z/OS?
                    How integrated is the workload managed by Tivoli Workload Scheduler and
                    Tivoli Workload Scheduler for z/OS?
                    – Do you have dependencies between jobs in Tivoli Workload Scheduler
                      and in Tivoli Workload Scheduler for z/OS?
                    – Or do most of the jobs in one workload scheduler run independently of
                      jobs in the other scheduler?
                    – Have you already managed to solve dependencies between jobs in Tivoli
                      Workload Scheduler and in Tivoli Workload Scheduler for z/OS efficiently?
                     The current use of Tivoli Workload Scheduler–specific functions that are not
                     available in Tivoli Workload Scheduler for z/OS.
                    – How intensive is the use of prompts, file dependencies, and “repeat range”
                      (run job every 10 minutes) in Tivoli Workload Scheduler?
                        Can these Tivoli Workload Scheduler–specific functions be replaced by
                        Tivoli Workload Scheduler for z/OS–specific functions or should they be
                        handled in another way?
                        Does it require some locally developed tools, programs, or workarounds?
                    – How extensive is the use of Tivoli Workload Scheduler job recovery
                      definitions?
                        Is it possible to handle these Tivoli Workload Scheduler recovery
                        definitions in another way when the job is managed by Tivoli Workload
                        Scheduler for z/OS?
                        Does it require some locally developed tools, programs, or workarounds?




Will Tivoli Workload Scheduler for z/OS give you some of the functions you
              are missing in Tivoli Workload Scheduler today?
              – Extended planning capabilities, long-term plan, current plan that spans
                more than 24 hours?
              – Better handling of carry-forward job streams?
              – Powerful run-cycle and calendar functions?
              Which platforms or systems are going to be managed by the Tivoli Workload
              Scheduler for z/OS end-to-end scheduling?
              What kind of integration do you have between Tivoli Workload Scheduler and,
              for example, SAP R/3, PeopleSoft, or Oracle Applications?
               Will you do a partial conversion of some jobs from the Tivoli Workload
               Scheduler–managed network to the Tivoli Workload Scheduler for z/OS
               managed network?

                Partial conversion example: Suppose that about 15% of your Tivoli Workload
                Scheduler–managed jobs or workload is directly related to the Tivoli
                Workload Scheduler for z/OS jobs or workload; that is, the Tivoli Workload
                Scheduler jobs are either predecessors or successors of Tivoli Workload
                Scheduler for z/OS jobs, and the current handling of these
                interdependencies is not effective or stable with your current solution.

               Converting the 15% of jobs to Tivoli Workload Scheduler for z/OS
               managed scheduling using the end-to-end solution will stabilize
               dependency handling and make scheduling more reliable. Note that this
               requires two instances of Tivoli Workload Scheduler workstations (one
               each for Tivoli Workload Scheduler and Tivoli Workload Scheduler for
               z/OS).

              Effort to convert Tivoli Workload Scheduler database object definitions to
              Tivoli Workload Scheduler for z/OS database object definitions.
              Will it be possible to convert the database objects with reasonable resources
              and within a reasonable time frame?


5.4.3 Conversion process from Tivoli Workload Scheduler to Tivoli
Workload Scheduler for z/OS
           The process of converting from a Tivoli Workload Scheduler-managed network to
           a Tivoli Workload Scheduler for z/OS-managed network has several steps. In the
           following description, we assume that we have an active Tivoli Workload
           Scheduler for z/OS environment as well as an active Tivoli Workload Scheduler
environment. We also assume that the Tivoli Workload Scheduler for z/OS
end-to-end server is installed and ready for use. The conversion process mainly
                contains the following steps or tasks:
                1. Plan the conversion and establish new naming standards.
                2. Install new Tivoli Workload Scheduler workstation instances dedicated to
                   communicating with the Tivoli Workload Scheduler for z/OS server.
                3. Define the topology of the Tivoli Workload Scheduler network in Tivoli
                   Workload Scheduler for z/OS and define associated Tivoli Workload
                   Scheduler for z/OS fault-tolerant workstations.
                4. Create JOBSCR members (in the SCRPTLIB data set) for all Tivoli Workload
                   Scheduler–managed jobs that should be converted.
                5. Convert the database objects from Tivoli Workload Scheduler format to Tivoli
                   Workload Scheduler for z/OS format.
                6. Educate planners and operators in the new Tivoli Workload Scheduler for
                   z/OS server functions.
                7. Test and verify the conversion and finalize for production.

                The sequencing of these steps may be different in your environment, depending
                on the strategy that you will follow when doing your own conversion.

                Step 1. Planning the conversion
                The conversion from Tivoli Workload Scheduler-managed scheduling to Tivoli
                Workload Scheduler for z/OS-managed scheduling can be a major project that
                requires considerable resources; the effort depends on the current size and
                usage of the Tivoli Workload Scheduler environment. Planning the conversion is
                an important task that helps you estimate the effort required and detail the
                different conversion steps.

                In the planning phase you should try to identify special usage of Tivoli Workload
                Scheduler functions or facilities that are not easily converted to Tivoli Workload
                Scheduler for z/OS. Furthermore, you should try to outline how these functions or
                facilities should be handled when scheduling is done by Tivoli Workload
                Scheduler for z/OS.

                Part of planning is also establishing the new naming standards for all or some of
                the Tivoli Workload Scheduler objects that are going to be converted. Some
                examples:
                    Naming standards for the fault-tolerant workstations in Tivoli Workload
                    Scheduler for z/OS
                    Names for workstations can be up to 16 characters in Tivoli Workload
                    Scheduler (if you are using expanded databases). In Tivoli Workload
Scheduler for z/OS, workstation names can be up to four characters. This
means you have to establish a new naming standard for the fault-tolerant
   workstations in Tivoli Workload Scheduler for z/OS.
   Naming standards for job names
   In Tivoli Workload Scheduler you can specify job names with lengths of up to
   40 characters (if you are using expanded databases). In Tivoli Workload
   Scheduler for z/OS, job names can be up to eight characters. This means that
   you have to establish a new naming standard for jobs on fault-tolerant
   workstations in Tivoli Workload Scheduler for z/OS.
   Adoption of the existing Tivoli Workload Scheduler for z/OS object naming
   standards
   You probably already have naming standards for job streams, workstations,
   job names, resources, and calendars in Tivoli Workload Scheduler for z/OS.
   When converting Tivoli Workload Scheduler database objects to the Tivoli
   Workload Scheduler for z/OS databases, you must adopt the Tivoli Workload
   Scheduler for z/OS naming standard.
   Access to the objects in Tivoli Workload Scheduler for z/OS database and
   plan
    Access to Tivoli Workload Scheduler for z/OS databases and plan objects is
   protected by your security product (for example, RACF). Depending on the
   naming standards for the imported Tivoli Workload Scheduler objects, you
   may need to modify the definitions in your security product.
   Is the current Tivoli Workload Scheduler network topology suitable and can it
   be implemented directly in a Tivoli Workload Scheduler for z/OS server?
    The current Tivoli Workload Scheduler network topology, as it is implemented
    today, may need some adjustments to be optimal. If your Tivoli Workload
    Scheduler network topology is not optimal, it should be reconfigured when it is
    implemented in Tivoli Workload Scheduler for z/OS end-to-end scheduling.

Step 2. Install Tivoli Workload Scheduler workstation
instances for Tivoli Workload Scheduler for z/OS
By Tivoli Workload Scheduler workstation instances, we mean the installation and
configuration of a new Tivoli Workload Scheduler engine. This engine should be
configured to be a domain manager, fault-tolerant agent, or a backup domain
manager, according to the Tivoli Workload Scheduler production environment
you are going to mirror. Following this approach, you will have two instances on
all the Tivoli Workload Scheduler managed systems:
1. One old Tivoli Workload Scheduler workstation instance dedicated to the
   Tivoli Workload Scheduler master.
2. One new Tivoli Workload Scheduler workstation instance dedicated to the
                   Tivoli Workload Scheduler for z/OS engine (master). Remember to use
                   different port numbers.
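
As a rough sketch, the localopts files of the two instances on one machine could
then differ as follows (the workstation names and port numbers are only
illustrative); the key point is that each instance listens on its own netman port:

   # localopts for the old instance (linked to the Tivoli Workload Scheduler master)
   thiscpu   =FTA1
   nm port   =31111

   # localopts for the new instance (linked to the TWS for z/OS end-to-end server)
   thiscpu   =F101
   nm port   =31182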

                By creating dedicated Tivoli Workload Scheduler workstation instances for Tivoli
                Workload Scheduler for z/OS scheduling, you can start testing the new
                environment without disturbing the distributed Tivoli Workload Scheduler
                production environment. This also makes it possible to do partial conversion,
                testing, and verification without interfering with the Tivoli Workload Scheduler
                production environment.

                You can choose different approaches for the conversion:
                    Try to group your Tivoli Workload Scheduler job streams and jobs into logical
                    and isolated groups and then convert them, group by group.
                    Convert all job streams and jobs, run some parallel testing and verification,
                    and then do the switch from Tivoli Workload Scheduler–managed scheduling
                    to Tivoli Workload Scheduler for z/OS–managed scheduling in one final step.

                The suitable approach differs from installation to installation. Some installations
                will be able to group job streams and jobs into isolated groups, while others will
                not. You have to decide the strategy for the conversion based on your installation.

                  Note: If you decide to reuse the Tivoli Workload Scheduler distributed
                  workstation instances in your Tivoli Workload Scheduler for z/OS managed
                  network, this is also possible. You may decide to move the distributed
                  workstations one by one (depending on how you have grouped your job
                  streams and how you are doing the conversion). When a workstation is going
                  to be moved to Tivoli Workload Scheduler for z/OS, you simply change the
                  port number in the localopts file on the Tivoli Workload Scheduler workstation.
                  The workstation will then be active in Tivoli Workload Scheduler for z/OS at
                  the next plan extension, replan, or redistribution of the Symphony file.
                  (Remember to create the associated DOMREC and CPUREC definitions in
                  the Tivoli Workload Scheduler for z/OS initialization statements.)


                Step 3. Define topology of Tivoli Workload Scheduler network
                in Tivoli Workload Scheduler for z/OS
                The topology for your Tivoli Workload Scheduler distributed network can be
                implemented directly in Tivoli Workload Scheduler for z/OS. This is done by
                creating the associated DOMREC and CPUREC definitions in the Tivoli
                Workload Scheduler for z/OS initialization statements.
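
As a minimal sketch (the domain name, workstation name, node, port, and user
shown here are only illustrative and must be replaced with your own values), the
definitions could look like this:

   DOMREC   DOMAIN(DM01)               /* Domain directly below MASTERDM       */
            DOMMNGR(FT01)              /* Workstation acting as domain manager */
            DOMPARENT(MASTERDM)        /* Parent domain                        */
   CPUREC   CPUNAME(FT01)              /* Fault-tolerant workstation name      */
            CPUOS(UNIX)                /* Operating system of the agent        */
            CPUNODE(copenhagen)        /* Host name or IP address              */
            CPUTCPIP(31182)            /* netman port of the TWS instance      */
            CPUDOMAIN(DM01)            /* Domain the workstation belongs to    */
            CPUTYPE(FTA)               /* Fault-tolerant agent                 */
            CPUAUTOLNK(ON)             /* Link when the Symphony is sent       */
            CPUFULLSTAT(ON)            /* Required for a domain manager        */
            CPURESDEP(ON)              /* Required for a domain manager        */
            CPUUSER(tws)               /* Default user for jobs on this FTW    */

You need one DOMREC for each domain (except MASTERDM) and one CPUREC for each
fault-tolerant workstation.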

                To activate the topology definitions, create the associated definitions for
fault-tolerant workstations in the Tivoli Workload Scheduler for z/OS workstation
database. Tivoli Workload Scheduler for z/OS extend or replan will activate these
new workstation definitions.

If you are using a dedicated Tivoli Workload Scheduler workstation for Tivoli
Workload Scheduler for z/OS scheduling, you can create the topology definitions
in an early stage of the conversion process. This way you can:
   Verify that the topology definitions are correct in Tivoli Workload Scheduler for
   z/OS.
   Verify that the dedicated fault-tolerant workstations are linked and available.
   Start getting some experience with the management of fault-tolerant
   workstations and a distributed Tivoli Workload Scheduler network.
   Implement monitoring and handling routines in your automation application
   on z/OS.

Step 4. Create JOBSCR members for all Tivoli Workload
Scheduler–managed jobs
Tivoli Workload Scheduler-managed jobs that should be converted to Tivoli
Workload Scheduler for z/OS must be defined in the SCRPTLIB data set. For
every active job defined in the Tivoli Workload Scheduler database, you define a
member in the SCRPTLIB data set containing:
   Name of the script or command for the job (defined in the JOBREC
   JOBSCR() or the JOBREC JOBCMD() specification)
   Name of the user ID that the job should execute under (defined in the
   JOBREC JOBUSR() specification)
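
For example, a SCRPTLIB member for a UNIX job could contain the following (the
member name matches the job name used in the operation; the script path and user
ID are only illustrative). A job that runs a command instead of a script would use
JOBCMD() instead of JOBSCR():

   JOBREC   JOBSCR('/opt/tws/scripts/dailyload.sh')
            JOBUSR(tws)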

 Note: If the same job script is going to be executed on several systems (it is
 defined on several workstations in Tivoli Workload Scheduler), you only have
 to create one member in the SCRPTLIB data set. This job (member) can be
 defined on several fault-tolerant workstations in several job streams in Tivoli
 Workload Scheduler for z/OS. It requires that the script is placed in a common
 directory (path) across all systems.


Step 5. Convert database objects from Tivoli Workload
Scheduler to Tivoli Workload Scheduler for z/OS
Tivoli Workload Scheduler database objects that should be converted — job
streams, resources, and calendars — probably cannot be converted directly to
Tivoli Workload Scheduler for z/OS. In this case you must amend the Tivoli
Workload Scheduler database objects to Tivoli Workload Scheduler for z/OS




                       Chapter 5. End-to-end implementation scenarios and examples   297
format and create the corresponding objects in the respective Tivoli Workload
                Scheduler for z/OS databases.

                Pay special attention to object definitions such as:
                    Job stream run-cycles for job streams and use of calendars in Tivoli Workload
                    Scheduler
                    Use of local (workstation-specific) resources in Tivoli Workload Scheduler
                    (local resources converted to global resources by the Tivoli Workload
                    Scheduler for z/OS master)
                    Jobs defined with “repeat range” (for example, run every 10 minutes in job
                    streams)
                    Job streams defined with dependencies on the job stream level
                    Jobs defined with Tivoli Workload Scheduler recovery actions

                For these object definitions, you have to design alternative ways of handling for
                Tivoli Workload Scheduler for z/OS.

                Step 6. Education for planners and operators
                Some of the handling of distributed Tivoli Workload Scheduler jobs in Tivoli
                Workload Scheduler for z/OS will be different from the handling in Tivoli
                Workload Scheduler. Also, some specific fault-tolerant workstation features will
                be available in Tivoli Workload Scheduler for z/OS.

                You should plan for the education of your operators and planners so that they
                have knowledge of:
                    How to define jobs and job streams for the Tivoli Workload Scheduler
                    fault-tolerant workstations
                    Specific rules to be followed for scheduling objects related to fault-tolerant
                    workstations
                    How to handle jobs and job streams on fault-tolerant workstations
                    How to handle resources for fault-tolerant workstations
                    The implications of doing, for example, Symphony redistribution
                    How Tivoli Workload Scheduler for z/OS end-to-end scheduling works
                    (engine, server, domain managers)
                    How the Tivoli Workload Scheduler network topology has been adopted in
                    Tivoli Workload Scheduler for z/OS




Step 7. Test and verify conversion and finalize for production
           After testing your approach for the conversion, doing some trial conversions, and
           testing the conversion carefully, it is time to do the final conversion to Tivoli
           Workload Scheduler for z/OS.

           The goal is to reach this final conversion and switch from Tivoli Workload
           Scheduler scheduling to Tivoli Workload Scheduler for z/OS scheduling within a
            reasonable time frame and with a reasonable level of errors. If the period when
            you are running both Tivoli Workload Scheduler and Tivoli Workload Scheduler
            for z/OS is too long, your planners and operators must handle two environments
            during that time. This is not efficient and can cause frustration for both
            planners and operators.

           The key to a successful conversion is good planning, testing, and verification.
           When you are comfortable with the testing and verification it is safe to do the final
           conversion and finalize for production.

           Tivoli Workload Scheduler for z/OS will then handle the central and the
           distributed workload, and you will have one focal point for your workload. The
           converted Tivoli Workload Scheduler production environment can be stopped.


5.4.4 Some guidelines to automate the conversion process
           If you have a large Tivoli Workload Scheduler scheduling environment, doing
           manual conversion will be too time-consuming. In this case you should consider
           trying to automate some or all of the conversion from Tivoli Workload Scheduler
           to Tivoli Workload Scheduler for z/OS.

           One obvious place to automate is the conversion of Tivoli Workload Scheduler
           database objects to Tivoli Workload Scheduler for z/OS database objects.
           Although this is not a trivial task, some automation can be implemented.
           Automation requires some locally developed tools or programs to handle
           conversion of the database objects.

            Some guidelines to help automate the conversion process are:
              Create text copies of all the Tivoli Workload Scheduler database objects by
              using the composer create command (Example 5-7).
           Example 5-7 Tivoli Workload Scheduler objects creation
            composer   create   calendars.txt      from   CALENDARS
            composer   create   workstations.txt   from   CPU=@
            composer   create   jobdef.txt         from   JOBS=@#@
            composer   create   jobstream.txt      from   SCHED=@#@
            composer   create   parameter.txt      from   PARMS
            composer   create   resources.txt      from   RESOURCES
            composer   create   prompts.txt        from   PROMPTS
            composer   create   users.txt          from   USERS=@#@

                    These text files are a good starting point when trying to estimate the effort
                    and time for conversion from Tivoli Workload Scheduler to Tivoli Workload
                    Scheduler for z/OS.
                    Use the workstations.txt file when creating the topology definitions (DOMREC
                    and CPUREC) in Tivoli Workload Scheduler for z/OS.
                    Creating the topology definitions in Tivoli Workload Scheduler for z/OS based
                    on the workstations.txt file is quite straightforward. The task can be automated
                    by coding a program (script or REXX) that reads the workstations.txt file
                    and converts the definitions to DOMREC and CPUREC specifications (see the
                    sketch at the end of this section).

                      Restriction: Tivoli Workload Scheduler CPU class definitions cannot be
                      converted directly to similar definitions in Tivoli Workload Scheduler for
                      z/OS.

                    Use the jobdef.txt file when creating the SCRPTLIB members.
                    In jobdef.txt, you have the workstation name for the script (used in the job
                    stream definition), the script name (goes to the JOBREC JOBSCR()
                    definition), the streamlogon (goes to the JOBREC JOBUSR() definition), the
                    description (can be added as comments in the SCRPTLIB member), and the
                    recovery definition.
                    The recovery definition needs special consideration because it cannot be
                    converted to Tivoli Workload Scheduler for z/OS auto-recovery, so you
                    need to make some workarounds. Use of Tivoli Workload Scheduler CPU
                    class definitions also needs special consideration: the job definitions using
                    CPU classes probably have to be copied to separate workstation-specific job
                    definitions in Tivoli Workload Scheduler for z/OS. The task can be automated
                    by coding a program (script or REXX) that reads the jobdef.txt file and
                    converts each job definition to a member in the SCRPTLIB. If you have many
                    Tivoli Workload Scheduler job definitions, a program that helps
                    automate this task can save a considerable amount of time.
                    The users.txt file (if you have Windows NT/2000 jobs) is converted to
                    USRREC initialization statements on Tivoli Workload Scheduler for z/OS.
                    Be aware that the password for the user IDs is encrypted in the users.txt file,
                    so you cannot automate the conversion right away. You must get the
                    password as it is defined on the Windows workstations and type it in the
                    USRREC USRPSW() definition.
                    The jobstream.txt file is used to generate corresponding job streams in Tivoli
                     Workload Scheduler for z/OS. The calendars.txt file is used in connection with
the jobstream.txt file when generating run cycles for the job streams in Tivoli
Workload Scheduler for z/OS. It could be necessary to create additional
calendars in Tivoli Workload Scheduler for z/OS.
When doing the conversion, note that:
– Some of the Tivoli Workload Scheduler job stream definitions cannot be
  converted directly to Tivoli Workload Scheduler for z/OS job stream
  definitions (for example: prompts, workstation-specific resources, file
  dependencies, and jobs with repeat range).
   For these definitions you must analyze the usage and find other ways to
   implement similar functions when using Tivoli Workload Scheduler for
   z/OS.
– Some of the Tivoli Workload Scheduler job stream definitions must be
  amended to Tivoli Workload Scheduler for z/OS definitions. For example:
   •   Dependencies on job stream level (use dummy start and end jobs in
       Tivoli Workload Scheduler for z/OS for job stream dependencies).
        Note that dependencies also include dependencies on prompts, files,
        and resources.
   •   Tivoli Workload Scheduler job and job stream priority (0 to 101) must
       be amended to Tivoli Workload Scheduler for z/OS priority (1 to 9).
       Furthermore, priority in Tivoli Workload Scheduler for z/OS is always
       on job stream level. (It is not possible to specify priority on job level.)
   •   Job stream run cycles (and calendars) must be converted to Tivoli
       Workload Scheduler for z/OS run cycles (and calendars).
– Description texts longer than 24 characters are not allowed for job streams
  or jobs in Tivoli Workload Scheduler for z/OS. If you have Tivoli Workload
  Scheduler job streams or jobs with more than 24 characters of description
  text, you should consider adding this text as Tivoli Workload Scheduler for
  z/OS operator instructions.
If you have a large number of Tivoli Workload Scheduler job streams, manual
handling of job streams can be too time-consuming. The task can be
automated to a certain extent by coding a program (script or REXX).
A good starting point is to code a program that identifies all areas where you
need special consideration or action. Use the output from this program to
estimate the effort of doing the conversion. Further, the output can be used to
identify and group used Tivoli Workload Scheduler functions where special
workarounds must be performed when converting to Tivoli Workload
Scheduler for z/OS.




The program can be further refined to handle the actual conversion,
                    performing the following steps:
                    – Read all of the text files.
                    – Analyze the job stream and job definitions.
                    – Create corresponding Tivoli Workload Scheduler for z/OS job streams with
                      amended run cycles and jobs.
                    – Generate a file with Tivoli Workload Scheduler for z/OS batch loader
                      statements for job streams and jobs (batch loader statements are Tivoli
                      Workload Scheduler for z/OS job stream definitions in a format that can be
                      loaded directly into the Tivoli Workload Scheduler for z/OS databases).
                    The batch loader file can be sent to the z/OS system and used as input to the
                    Tivoli Workload Scheduler for z/OS batch loader program. The Tivoli
                    Workload Scheduler for z/OS batch loader will read the file (data set) and
                    create the job streams and jobs defined in the batch loader statements.
                    The resources.txt file is used to define the corresponding resources in Tivoli
                    Workload Scheduler for z/OS.
                    Remember that local (workstation-specific) resources are not allowed in Tivoli
                    Workload Scheduler for z/OS. This means that the Tivoli Workload Scheduler
                    workstation-specific resources will be converted to global special resources
                    in Tivoli Workload Scheduler for z/OS.
                    The Tivoli Workload Scheduler for z/OS engine is directly involved when
                    resolving a dependency to a global resource. A fault-tolerant workstation job
                    must interact with the Tivoli Workload Scheduler for z/OS engine to resolve a
                    resource dependency. This can jeopardize the fault tolerance in your network.
                     The use of parameters in the parameter.txt file must be analyzed.
                    What are the parameters used for?
                    – Are the parameters used for date calculations?
                    – Are the parameters used to pass information from one job to another job
                      (using the Tivoli Workload Scheduler parms command)?
                    – Are the parameters used as parts of job definitions, for example, to specify
                      where the script is placed?
                    Depending on how you used the Tivoli Workload Scheduler parameters, there
                    will be different approaches when converting to Tivoli Workload Scheduler for
                    z/OS. Unless you use parameters as part of Tivoli Workload Scheduler object
                    definitions, you usually do not have to do any conversion. Parameters will still
                    work after the conversion.
                    You have to copy the parameter database to the home directory of the Tivoli
                     Workload Scheduler fault-tolerant workstations. The parms command can still
be used locally on the fault-tolerant workstation when managed by Tivoli
            Workload Scheduler for z/OS.
            We will show how to use Tivoli Workload Scheduler parameters in connection
            with Tivoli Workload Scheduler for z/OS JCL variables. This is a way to pass
            values for Tivoli Workload Scheduler for z/OS JCL variables to Tivoli
            Workload Scheduler parameters so that they can be used locally on the
            fault-tolerant workstation.
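
             To close this section, here is a rough sketch of the kind of conversion program
             mentioned above for the topology definitions. It assumes that each workstation
             definition in workstations.txt contains CPUNAME, NODE, TCPADDR, and DOMAIN
             keywords and ends with an END line; verify these assumptions against your own
             composer output, and add your own logic for the new four-character workstation
             names and for setting CPUFULLSTAT and CPURESDEP on domain managers:

             #!/bin/sh
             # Sketch only: generate CPUREC statements from the composer-created
             # workstations.txt file. Check the generated statements carefully before
             # adding them to the TWS for z/OS initialization member.
             awk '
               toupper($1) == "CPUNAME" { name = substr($2, 1, 4) }  # apply your real naming rules
               {
                 for (i = 1; i < NF; i++) {
                   if (toupper($i) == "NODE")    node = $(i + 1)
                   if (toupper($i) == "TCPADDR") port = $(i + 1)
                   if (toupper($i) == "DOMAIN")  dom  = $(i + 1)
                 }
               }
               toupper($1) == "END" {                 # end of one workstation definition
                 printf "CPUREC   CPUNAME(%s)\n",  name
                 printf "         CPUNODE(%s)\n",  node
                 printf "         CPUTCPIP(%s)\n", port
                 printf "         CPUDOMAIN(%s)\n", dom
                 printf "         CPUTYPE(FTA)\n"
                 printf "         CPUAUTOLNK(ON)\n"
                 printf "         CPUUSER(tws)\n"
                 name = node = port = dom = ""
               }
             ' workstations.txt > cpurec.txt

             A similar program can be written to turn jobdef.txt into SCRPTLIB members and
             jobstream.txt into batch loader statements.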



5.5 Tivoli Workload Scheduler for z/OS end-to-end
fail-over scenarios
         In this section, we describe how to make the Tivoli Workload Scheduler for z/OS
         end-to-end environment fail-safe and plan for system outages. We also show
         some fail-over scenario examples.

         To make your Tivoli Workload Scheduler for z/OS end-to-end environment
         fail-safe, you have to:
            Configure Tivoli Workload Scheduler for z/OS backup engines (also called hot
            standby engines) in your sysplex.
             If you do not run a sysplex, but have more than one z/OS system with shared
             DASD, then you should make sure that the Tivoli Workload Scheduler for
             z/OS engine can be moved from one system to another without any problems.
            Configure your z/OS systems to use a virtual IP address (VIPA).
            VIPA is used to make sure that the Tivoli Workload Scheduler for z/OS
            end-to-end server always gets the same IP address no matter which z/OS
            system it is run on. VIPA assigns a system-independent IP address to the
            Tivoli Workload Scheduler for z/OS server task.
            If using VIPA is not an option, you should consider other ways of assigning a
            system-independent IP address to the Tivoli Workload Scheduler for z/OS
            server task. For example, this can be a hostname file, DNS, or stack affinity.
            Configure a backup domain manager for the first-level domain manager.

         Refer to the Tivoli Workload Scheduler for z/OS end-to-end configuration, shown
         in Figure 5-1 on page 266, for the fail-over scenarios.

         When the environment is configured to be fail-safe, the next step is to test that
         the environment actually is fail-safe. We did the following fail-over tests:
            Switch to the Tivoli Workload Scheduler for z/OS backup engine.
            Switch to the Tivoli Workload Scheduler backup domain manager.



5.5.1 Configure Tivoli Workload Scheduler for z/OS backup engines
                To ensure that the Tivoli Workload Scheduler for z/OS engine will be started,
                either as active engine or standby engine, we specify:
                    OPCOPTS OPCHOST(PLEX)

                In the initialization statements for the Tivoli Workload Scheduler for z/OS engine
                (pointed to by the member of the EQQPARM library as specified by the parm
                parameter on the JCL EXEC statement), OPCHOST(PLEX) means that the
                engine has to start as the controlling system. If there already is an active engine
                in the XCF group, the engine continues its startup as a standby engine.

                  Note: OPCOPTS OPCHOST(YES) must be specified if you start the engine with an
                  empty checkpoint data set. This could be the case the first time you start a
                  newly installed engine or after you have migrated from a previous release of
                  Tivoli Workload Scheduler for z/OS.

                OPCHOST(PLEX) is valid only when an XCF group and member have been
                specified. Also, this selection requires that Tivoli Workload Scheduler for z/OS is
                running on z/OS/ESA Version 4 Release 1 or later. Because we are running
                z/OS 1.3, we can use the OPCHOST(PLEX) definition. We specify the XCF group
                and member definitions for the engine as shown in Example 5-8.
                Example 5-8 xcf group and member definitions
                XCFOPTS GROUP(TWS820)
                        MEMBER(TWSC&SYSNAME.)
                /*      TAKEOVER(SYSFAIL,HOSTFAIL)             Do takeover manually !!   */


                  Tip: We use the z/OS sysplex-wide SYSNAME variable when specifying the
                  member name for the engine in the sysplex. Using z/OS variables this way, we
                  can have common Tivoli Workload Scheduler for z/OS parameter member
                  definitions for all our engines (and agents as well).

                  For example, when the engine is started on SC63, the
                  MEMBER(TWSC&SYSNAME) will be MEMBER(TWSCSC63).

                You must have unique member names for all your engines (active and standby)
                running in the same sysplex. We ensure this by using the SYSNAME variable.




Tip: We have not activated the TAKEOVER(SYSFAIL,HOSTFAIL) parameter
            in XCFOPTS because we do not want the engine to switch automatically to
            one of its backup engines in case the active engine fails or the system fails.

            Because we have not specified the TAKEOVER parameter, we are making the
            switch to one of the backup engines manually. The switch is made by issuing
            the following modify command on the z/OS system where you want the
            backup engine to take over:
               F TWSC,TAKEOVER

            In this example, TWSC is the name of our Tivoli Workload Scheduler for z/OS
            backup engine started task (same name on all systems in the sysplex).

            The takeover can be managed by SA/390, for example. This way SA/390 can
            integrate the switch to a backup engine with other automation tasks in the
            engine or on the system.

            We did not define a Tivoli Workload Scheduler for z/OS APPC server task for the
            Tivoli Workload Scheduler for z/OS panels and PIF programs, as described in
            “Remote panels and program interface applications” on page 31, but we strongly
            recommend that you use such a server task in sysplex environments where the
            engine can be moved to different systems in the sysplex. If you do not use the
            Tivoli Workload Scheduler for z/OS APPC server task, you must log off and then
            log on to the system where the engine is currently active.


5.5.2 Configure DVIPA for Tivoli Workload Scheduler for z/OS
end-to-end server
           To make sure that the engine can be moved from SC64 to either SC63 or SC65,
           Dynamic VIPA is used to define the IP address for the server task. This DVIPA IP
           address is defined in the profile data set pointed to by PROFILE DD-card in the
           TCPIP started task.

            The VIPA definition that is used to define logical sysplex-wide IP addresses for
            the Tivoli Workload Scheduler for z/OS end-to-end server, engine, and JSC server
            is shown in Example 5-9.
           Example 5-9 The VIPA definition
           VIPADYNAMIC
             viparange define 255.255.255.248 9.12.6.104
            ENDVIPADYNAMIC
PORT
                   424 TCP TWSC             BIND 9.12.6.105
                   5000 TCP TWSCJSC          BIND 9.12.6.106
                   31282 TCP TWSCE2E         BIND 9.12.6.107


                In this example, the first column under PORT is the port number, the third column
                is the name of the started task, and the fifth column is the logical sysplex-wide IP
                address.

                Port 424 is used for the Tivoli Workload Scheduler for z/OS tracker agent IP
                address, port 5000 for the Tivoli Workload Scheduler for z/OS JSC server task,
                and port 31282 is used for the Tivoli Workload Scheduler for z/OS end-to-end
                server task.

                With these VIPA definitions, we have made a relation between port number,
                started task name, and the logical IP address that can be used sysplex-wide.

                The TWSCE2E host name and 31282 port number that are used for the Tivoli
                Workload Scheduler for z/OS end-to-end server are defined in the TOPOLOGY
                HOSTNAME(TWSCE2E) initialization statement used by the TWSCE2E server
                and Tivoli Workload Scheduler for z/OS plan programs.
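
                The relevant fragment of the TOPOLOGY statement could then look like this
                (parameter names follow the TOPOLOGY statement syntax; other parameters,
                such as BINDIR and WRKDIR, are not shown):

                   TOPOLOGY HOSTNAME(TWSCE2E)       /* Sysplex-wide (VIPA) host name      */
                            PORTNUMBER(31282)       /* Port used by the end-to-end server */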

                When the Tivoli Workload Scheduler for z/OS engine creates the Symphony file,
                the TWSCE2E host name and 31282 port number will be part of the Symphony
                file. The first-level domain manager (U100) and the backup domain manager
                (F101) will use this host name when they establish outbound IP connections to
                the Tivoli Workload Scheduler for z/OS server. The backup domain manager only
                establishes outbound IP connections to the Tivoli Workload Scheduler for z/OS
                server if it is going to take over the responsibilities for the first-level domain
                manager.


5.5.3 Configure backup domain manager for first-level domain manager

                  Note: The examples and text below refer to a different end-to-end scheduling
                  network, so the names of workstations are different than in the rest of the
                  redbook. This section is included here mostly unchanged from End-to-End
                  Scheduling with Tivoli Workload Scheduler 8.1, SG24-6022, because the
                  steps to switch to a backup domain manager are the same in Version 8.2 as
                  they were in Version 8.1.

                  One additional option that is available with Tivoli Workload Scheduler 8.2 is to
                  use the WSSTAT command instead of the Job Scheduling Console to do the
                  switch (from backup domain manager to first-level domain manager). This
                  method is also shown in this scenario, in addition to the GUI method.


In this section, we show how to configure a backup domain manager for a
first-level domain manager. In this scenario, we have the F100 FTA configured as
the first-level domain manager and the F101 FTA configured as the backup domain
manager. The initial DOMREC definitions in Example 5-10 show that F100 (in
bold) is defined as the first-level domain manager.
Example 5-10 DOMREC definitions
/**********************************************************************/
/* DOMREC: Defines the domains in the distributed Tivoli Workload     */
/*         Scheduler network                                          */
/**********************************************************************/
/*--------------------------------------------------------------------*/
/* Specify one DOMREC for each domain in the distributed network.     */
/* With the exception of the master domain (whose name is MASTERDM    */
/* and consist of the TWS for z/OS engine).                       */
/*--------------------------------------------------------------------*/
DOMREC   DOMAIN(DM100)              /* Domain name for 1st domain     */
         DOMMNGR(F100)              /* Chatham FTA - domain manager   */
         DOMPARENT(MASTERDM)        /* Domain parent is MASTERDM     */
DOMREC DOMAIN(DM200)                /* Domain name for 2nd domain     */
         DOMMNGR(F200)              /* Yarmouth FTA - domain manager */
         DOMPARENT(DM100)           /* Domain parent is DM100        */


The F101 fault-tolerant agent can be configured to be the backup domain
manager simply by specifying the entries shown in bold in Example 5-11 in its
CPUREC definition.
Example 5-11 Configuring F101 to be the backup domain manager
CPUREC   CPUNAME(F101)
         CPUTCPIP(31758)
         CPUUSER(tws)
         CPUDOMAIN(DM100)
         CPUSERVER(1)
         CPUFULLSTAT(ON)                /* Full status on for Backup DM */
         CPURESDEP(ON)                  /* Resolve dep. on for Backup DM */


With CPUFULLSTAT (full status information) and CPURESDEP (resolve
dependency information) set to On, the Symphony file on F101 is updated with
the same reporting and logging information as the Symphony file on F100. The
backup domain manager will then be able to take over the responsibilities of the
first-level domain manager.




Note: FixPack 04 introduces a new Fault-Tolerant Switch Feature, which is
                  described in a PDF file named FaultTolerantSwitch.README.

                  The new Fault-Tolerant Switch Feature replaces and enhances the existing or
                  traditional Fault-Tolerant Switch Manager for backup domain managers.




5.5.4 Switch to Tivoli Workload Scheduler backup domain manager
                This scenario is divided into two parts:
                    A short-term switch to the backup manager
                    By short-term switch, we mean that we have switched back to the original
                    domain manager before the current plan is extended or replanned.
                    A long-term switch
                    By a long-term switch, we mean that the switch to the backup manager will be
                    effective across the current plan extension or replan.

                Short-term switch to the backup manager
                In this scenario, we issue a switchmgr command on the F101 backup domain
                manager. We then verify that F101 takes over the responsibilities of the old
                first-level domain manager.

                The steps in the short-term switch scenario are:
                1. Issue the switch command on the F101 backup domain manager.
                2. Verify that the switch is done.

                Step 1. Issue switch command on F101 backup domain manager
                Before we do the switch, we check the status of the workstations from a JSC
                instance pointing to the first-level domain manager (Figure 5-8 on page 308).




                Figure 5-8 Status for workstations before the switch to F101




Note in Figure 5-8 that F100 is MANAGER (in the CPU Type column) for the DM100
domain. F101 is FTA (in the CPU Type column) in the DM100 domain.

To simulate that the F100 first-level domain manager is down or unavailable due
to a system failure, we issue the switch manager command on the F101 backup
domain manager. The switch manager command is initiated from the conman
command line on F101:
   conman switchmgr "DM100;F101"

In this example, DM100 is the domain and F101 is the fault-tolerant workstation we
are going to switch to. The F101 fault-tolerant workstation responds with the
message shown in Example 5-12.
Example 5-12 Messages showing switch has been initiated
TWS for UNIX (AIX)/CONMAN 8.1 (1.36.1.3)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
Installed for group 'TWS-EndToEnd'.
TWS for UNIX (AIX)/CONMAN 8.1 (1.36.1.3)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM
Corp.
Installed for group 'TWS-EndToEnd'.
Locale LANG set to "en_US"
 Schedule (Exp) 02/27/02 (#107) on F101. Batchman LIVES. Limit: 20, Fence: 0,
Audit Level: 0
switchmgr DM100;F101
AWS20710041I Service 2005 started on F101
AWS22020120 Switchmgr command executed from cpu F101 on cpu F101.


This indicates that the switch has been initiated.

It is also possible to initiate the switch from a JSC instance pointing to the F101
backup domain manager. Because we do not have a JSC instance pointing to the
backup domain manager, we use the conman switchmgr command locally on the
F101 backup domain manager.




For your information, we show how to initiate the switch from the JSC:
                1. Double-click Status of all Domains in the Default Plan Lists in the domain
                   manager JSC instance (TWSC-F100-Eastham) (Figure 5-9).




                Figure 5-9 Status of all Domains list

                2. Right-click the DM100 domain for the context menu shown in Figure 5-10.




                Figure 5-10 Context menu for the DM100 domain

                3. Select Switch Manager. The JSC shows a new pop-up window in which we
                   can search for the agent we will switch to (Figure 5-11).


Figure 5-11 The Switch Manager - Domain search pop-up window

4. Click the search button (the square box with three dots to the right of the F100
   domain shown in Figure 5-11), and JSC opens the Find Workstation Instance
   pop-up window (Figure 5-12 on page 311).




Figure 5-12 JSC Find Workstation Instance window

5. Click Start (Figure 5-12). JSC opens a new pop-up window that contains all
   the fault-tolerant workstations in the network (Figure 5-13 on page 312).


6. If we specify a filter in the Find field (shown in Figure 5-12), the filter is
                    used to narrow the list of workstations that are shown.




                Figure 5-13 The result from Find Workstation Instance

                7. Mark the workstation to switch to (F101 in our example) and click OK in the
                   Find Workstation Instance window (Figure 5-13).
                8. Click OK in the Switch Manager - Domain pop-up window to initiate the
                   switch. Note that the selected workstation (F101) appears in the pop-up
                   window (Figure 5-14).




Figure 5-14 Switch Manager - Domain pop-up window with selected FTA

The switch to F101 is initiated and Tivoli Workload Scheduler performs the
switch.

 Note: With Tivoli Workload Scheduler for z/OS 8.2, you can now switch the
 domain manager using the WSSTAT TSO command on the mainframe. The
 Tivoli Workload Scheduler for z/OS Managing the Workload, SC32-1263
 guide incorrectly states the syntax of this command. DOC APAR PQ93442
 has been opened to correct the documentation.

If you prefer to work with the JSC, the above method of switching will appeal to
you. If you are a mainframe operator, you may prefer to perform this sort of task
from the mainframe. The example below shows how to perform the switch using the
WSSTAT TSO command instead.

In Example 5-13, the workstation F101 is instructed to become the new domain
manager of the DM100 domain. The command is sent via the TWST tracker
subsystem.
Example 5-13 Alternate method of switching domain manager, using WSSTAT command
WSSTAT SUBSYS(TWST) WSNAME(F101) MANAGES(DM100)


If you prefer to work with the UNIX or Windows command line, Example 5-14
shows how to run the switchmgr command from conman.
Example 5-14 Alternate method of switching domain manager, using switchmgr
conman 'switchmgr DM100;F101'




Step 2. Verify that the switch is done
                We check the status for the workstation using the JSC pointing to the old
                first-level domain manager, F100 (Figure 5-15).




                Figure 5-15 Status for the workstations after the switch to F101

                In Figure 5-15, it can be verified that F101 is now MANAGER (see CPU Type
                column) for the DM100 domain (the Domain column). The F100 is changed to an
                FTA (the CPU Type column).

                The OPCMASTER workstation has the status unlinked (as shown in the Link
                Status column in Figure 5-15 on page 314). This status is correct, as we are
                using the JSC instance pointing to the F100 workstation. The OPCMASTER has
                a linked status on F101, as expected.

                Switching to the backup domain manager takes some time, so be patient. The
                reason for this is that the switch manager command stops the backup domain
                manager and restarts it as the domain manager. All domain member
                fault-tolerant workstations are informed about the switch, and the old domain
                manager is converted to a fault-tolerant agent in the domain. The fault-tolerant
                workstations use the switch information to update their Symphony file with the
                name of the new domain manager. Then they stop and restart to link to the new
                domain manager.

                On rare occasions, the link status is not shown correctly in the JSC after a switch
                to the backup domain manager. If this happens, try to Link the workstation
                manually by right-clicking the workstation and clicking Link in the pop-up window.

                  Note: To reactivate F100 as the domain manager, simply do a switch manager
                  back to F100 or redistribute the Symphony file. F100 will also be reinstated as
                  the domain manager when you run the extend or replan programs.


                Long-term switch to the backup manager
                The identification of domain managers is placed in the Symphony file. If a switch
                domain manager command is issued, the old domain manager name will be
                replaced with the new (backup) domain manager name in the Symphony file.



If the switch to the backup domain manager is going to be effective across Tivoli
Workload Scheduler for z/OS plan extension or replan, we have to update the
DOMREC definition. This is also the case if we redistribute the Symphony file
from Tivoli Workload Scheduler for z/OS.

The plan program reads the DOMREC definitions and creates a Symphony file
with domain managers and fault-tolerant agents accordingly. If the DOMREC
definitions are not updated to reflect the switch to the backup domain manager,
the old domain manager will automatically resume domain management
responsibilities.

The steps in the long-term switch scenario are:
1. Issue the switch command on the F101 backup domain manager.
2. Verify that the switch is done.
3. Update the DOMREC definitions used by the TWSCE2E server and the Tivoli
   Workload Scheduler for z/OS plan programs.
4. Run the replan plan program in Tivoli Workload Scheduler for z/OS.
5. Verify that the switched F101 is still the domain manager.

Step 1. Issue switch command on F101 backup domain manager
The switch command is done as described in “Step 1. Issue switch command on
F101 backup domain manager” on page 308.

Step 2. Verify that the switch is done
We check the status of the workstation using the JSC pointing to the old first-level
domain manager, F100 (Figure 5-16).




Figure 5-16 Status of the workstations after the switch to F101

From Figure 5-16 it can be verified that F101 is now MANAGER (see CPU Type
column) for the DM100 domain (see the Domain column). F100 is changed to an
FTA (see the CPU Type column).

The OPCMASTER workstation has the status unlinked (see the Link Status
column in Figure 5-16). This status is correct, as we are using the JSC instance



pointing to the F100 workstation. The OPCMASTER has a linked status on F101,
                as expected.

                Step 3. Update the DOMREC definitions for server and plan program
                We update the DOMREC definitions, so F101 will be the new first-level domain
                manager (Example 5-15).
                Example 5-15 DOMREC definitions
                /**********************************************************************/
                /* DOMREC: Defines the domains in the distributed Tivoli Workload     */
                /*         Scheduler network                                          */
                /**********************************************************************/
                /*--------------------------------------------------------------------*/
                /* Specify one DOMREC for each domain in the distributed network.     */
                /* With the exception of the master domain (whose name is MASTERDM    */
                 /*         and consists of the TWS for z/OS engine).                      */
                /*--------------------------------------------------------------------*/
                DOMREC   DOMAIN(DM100)              /* Domain name for 1st domain     */
                         DOMMNGR(F101)              /* Chatham FTA - domain manager   */
                         DOMPARENT(MASTERDM)        /* Domain parent is MASTERDM     */
                DOMREC DOMAIN(DM200)                /* Domain name for 2nd domain     */
                         DOMMNGR(F200)              /* Yarmouth FTA - domain manager */
                         DOMPARENT(DM100)           /* Domain parent is DM100        */


                The DOMREC DOMMNGR(F101) keyword defines the name of the first-level domain
                manager. This is the only change needed in the DOMREC definition.

                We created an extra member in the EQQPARM data set and called it
                TPSWITCH. This member has the updated DOMREC definitions to be used
                when we have a long-term switch. In the EQQPARM data set, we have three
                members: TPSWITCH (F101 is domain manager), TPNORM (F100 is domain
                manager), and TPDOMAIN (the member used by TWSCE2E and the plan
                programs).

                Before the plan programs are executed, we replace the TPDOMAIN member with
                the TPSWITCH member. When F100 is going to be the domain manager again
                we simply replace the TPDOMAIN member with the TPNORM member.
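
                As a sketch, the member swap could be done with a simple IEBGENER step
                that copies the TPSWITCH member over the TPDOMAIN member. The data set
                name used here is hypothetical; substitute the name of your EQQPARM
                library.

                   //COPYDOM  EXEC PGM=IEBGENER
                   //SYSPRINT DD SYSOUT=*
                   //SYSIN    DD DUMMY
                   //* Copy the long-term switch definitions (TPSWITCH) over the member
                   //* that is read by the TWSCE2E server and the plan programs
                   //SYSUT1   DD DISP=SHR,DSN=TWS.V8R20.PARMLIB(TPSWITCH)
                   //SYSUT2   DD DISP=SHR,DSN=TWS.V8R20.PARMLIB(TPDOMAIN)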




 Tip: If you let your system automation (for example, System Automation/390)
 handle the switch to the backup domain manager, you can automate the entire
 process:
    System automation replaces the EQQPARM members.
    System automation initiates the switch manager command remotely on the
    fault-tolerant workstation.
    System automation resets the definitions when the original domain
    manager is ready to be activated.

Step 4. Run replan plan program in Tivoli Workload Scheduler for
z/OS

We submit a replan plan program (job) using option 3.1 from legacy ISPF in the
Tivoli Workload Scheduler for z/OS engine and verify the output.

Example 5-16 shows the messages in EQQMLOG.
Example 5-16 EQQMLOG
EQQZ014I   MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPDOMAIN IS: 0000
EQQ3005I   CPU F101 IS SET AS DOMAIN MANAGER OF FIRST LEVEL
EQQ3030I   DOMAIN MANAGER F101 MUST HAVE SERVER ATTRIBUTE SET TO BLANK
EQQ3011I   CPU F200 SET AS DOMAIN MANAGER
EQQZ013I   NOW PROCESSING PARAMETER LIBRARY MEMBER TPUSER
EQQZ014I   MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPUSER IS: 0000


The F101 fault-tolerant workstation is the first-level domain manager.

The EQQ3030I message is due to the CPUSERVER(1) specification in the
CPUREC definition for the F101 workstation. The CPUSERVER(1) specification
is used when F101 is running as a fault-tolerant workstation managed by the
F100 domain manager.

Step 5. Verify that the switched F101 is still domain manager
Finally, we verify that F101 is the domain manager after the replan program has
finished and the Symphony file is distributed (Figure 5-17).




Figure 5-17 Workstations status after Tivoli Workload Scheduler for z/OS replan program

                From Figure 5-17, it can be verified that F101 is still MANAGER (in the CPU Type
                column) for the DM100 domain (in the Domain column). The CPU type for F100
                is FTA.

                The OPCMASTER workstation has the status unlinked (the Link Status column).
                This status is correct, as we are using the JSC instance pointing to the F100
                workstation. The OPCMASTER has a linked status on F101, as expected.

                  Note: To reactivate F100 as a domain manager, simply do a switch manager
                  back to F100 or redistribute the Symphony file. F100 will also be reinstated as
                  domain manager when you run the extend or replan programs.

                  Remember to change the DOMREC definitions back before the plan programs
                  are executed or the Symphony file is redistributed.


5.5.5 Implementing Tivoli Workload Scheduler high availability in
high availability environments
                You can also use high availability environments such as High Availability Cluster
                Multi-Processing (HACMP) or Microsoft Cluster Server (MSCS) to implement
                fail-safe Tivoli Workload Scheduler workstations.

                The redbook High Availability Scenarios with IBM Tivoli Workload Scheduler and
                IBM Tivoli Framework, SG24-6632, discusses these scenarios in detail, so we
                refer you to it for implementing Tivoli Workload Scheduler high availability using
                HACMP or MSCS.



5.6 Backup and maintenance guidelines for FTAs
                In this section, we discuss some important backup and maintenance guidelines
                for Tivoli Workload Scheduler fault-tolerant agents (workstations) in an
                end-to-end scheduling environment.



5.6.1 Backup of the Tivoli Workload Scheduler FTAs
           To make sure that you can recover from disk or system failures on the system
           where the Tivoli Workload Scheduler engine is installed, you should make a daily
           or weekly backup of the installed engine.

           The backup can be done in several ways. You probably already have some
           backup policies and routines implemented for the system where the Tivoli
           Workload Scheduler engine is installed. These backups should be extended to
           make a backup of files in the <TWShome> and the <TWShome/..> directories.

           We suggest that you have a backup of all of the Tivoli Workload Scheduler files in
           the <TWShome> and <TWShome/..> directories. If the Tivoli Workload
           Scheduler engine is running as a fault-tolerant workstation in an end-to-end
           network, it should be sufficient to make the backup on a weekly basis.

           When deciding how often a backup should be generated, consider:
              Are you using parameters on the Tivoli Workload Scheduler agent?
              If you are using parameters locally on the Tivoli Workload Scheduler agent
              and do not have a central repository for the parameters, you should consider
              making daily backups.
              Are you using specific security definitions on the Tivoli Workload Scheduler
              agent?
              If you are using specific security file definitions locally on the Tivoli Workload
              Scheduler agent and do not have a central repository for the security file
              definitions, you should consider making daily backups.

           Another approach is to make a backup of the Tivoli Workload Scheduler agent
           files, at least before making any changes to the files. For example, the changes
           can be updates to configuration parameters or a patch update of the Tivoli
           Workload Scheduler agent.
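
            As a minimal sketch, a weekly backup job on the agent system could archive
            the directories mentioned above. The paths and the backup target are
            assumptions; adapt them to your installation.

               #!/bin/sh
               # Weekly backup of the TWS engine files (paths are examples only)
               TWSHOME=/tivoli/TWS
               BACKUP=/backup/tws_engine_`date +%Y%m%d`.tar
               # The parent of <TWShome> also contains the unison directory, so
               # archiving it covers both <TWShome> and <TWShome/..>
               cd $TWSHOME/.. && tar -cvf $BACKUP .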


5.6.2 Stdlist files on Tivoli Workload Scheduler FTAs
           Tivoli Workload Scheduler fault-tolerant agents save job logs on the system
           where the jobs run. These job logs are stored in a directory named
           <twshome>/stdlist. In the stdlist (standard list) directory, there will be
           subdirectories with the name ccyy.mm.dd (where cc is the century, yy is the year,
            mm is the month, and dd is the day of the month).

           This subdirectory is created daily by the Tivoli Workload Scheduler netman
           process when a new Symphony file (Sinfonia) is received on the fault-tolerant
           agent. The Symphony file is generated by the Tivoli Workload Scheduler for z/OS
           controller plan program in the end-to-end scheduling environment.


The ccyy.mm.dd subdirectory contains a job log for each job that is executed on a
                particular production day, as seen in Example 5-17.
                Example 5-17 Files in a stdlist/ccyy.mm.dd directory
                O19502.0908        File with job log for job with process no. 19502 run at 09.08
                O19538.1052        File with job log for job with process no. 19538 run at 10.52
                O38380.1201        File with job log for job with process no. 38380 run at 12.01


                These log files are created by the Tivoli Workload Scheduler job manager
                process (jobman) and will remain there until deleted by the system administrator.

                Tivoli Workload Scheduler also logs messages from its own programs. These
                messages are stored in a subdirectory of the stdlist directory called logs.

                The easiest way to maintain the growth of these directories is to decide how long
                the log files are needed and schedule a job under Tivoli Workload Scheduler for
                z/OS control, which removes any file older than the given number of days. The
                Tivoli Workload Scheduler rmstdlist command can perform the deletion of
                stdlist files and remove or display files in the stdlist directory based on age of the
                files:
                     rmstdlist [-v |-u]
                     rmstdlist [-p] [age]

                In these commands, the arguments are:
                -v       Displays the command version and exits.
                -u       Displays the command usage information and exits.
                -p       Displays the names of qualifying standard list file directories. No directories
                         or files are removed. If you do not specify -p, the qualifying standard list files
                         are removed.
                age      The minimum age, in days, for standard list file directories to be displayed
                         or removed. The default is 10 days.

                We suggest that you run the rmstdlist command daily on all of your
                fault-tolerant agents. This command can be defined in a job in a job stream and
                scheduled by Tivoli Workload Scheduler for z/OS. You may need to save a
                backup copy of the stdlist files, for example, for internal revision or due to
                company policies. If this is the case, a backup job can be scheduled to run just
                before the rmstdlist job.
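
                As a sketch, such a job could be defined in a SCRPTLIB member similar to
                the one below. The member name, script path, user, and retention period
                are examples only.

                   /* Hypothetical member RMSTDL: delete stdlist files older than 10 days */
                   JOBREC JOBSCR('/tivoli/TWS/bin/rmstdlist 10') JOBUSR(tws-e)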




5.6.3 Auditing log files on Tivoli Workload Scheduler FTAs
           The auditing function can be used to track changes to the Tivoli Workload
           Scheduler plan (the Symphony file) on FTAs. Plan auditing is enabled by the
            TOPOLOGY PLANAUDITLEVEL parameter, described below.
              PLANAUDITLEVEL(0|1)

           Enables or disables plan auditing for distributed agents. Valid values are 0 to
           disable plan auditing and 1 to activate plan auditing. Auditing information is
           logged to a flat file in the TWShome/audit/plan directory. Each Tivoli Workload
           Scheduler workstation maintains its own log. Only actions are logged in the
           auditing file, not the success or failure of any action. If you change the value, you
           also need to restart the Tivoli Workload Scheduler for z/OS server and renew the
           Symphony file.
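
            As a sketch, plan auditing could be switched on by adding the keyword to the
            existing TOPOLOGY statement in the server and plan program parameter
            members; the other keywords stay as they are in your installation.

               TOPOLOGY ...                     /* existing TOPOLOGY keywords       */
                        PLANAUDITLEVEL(1)       /* log plan changes on the FTAs     */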

            After plan auditing has been enabled, each modification to the Tivoli Workload
            Scheduler plan (the Symphony file) on an FTA is logged to the plan audit directory
            on that workstation:
              <TWShome>/audit/plan/date (where date is in ccyymmdd format)

           We suggest that you clean out the audit database and plan directories regularly,
            daily if necessary. The cleanup of these directories can be defined in a job in a job
           stream and scheduled by Tivoli Workload Scheduler for z/OS. You may need to
           save a backup copy of the audit files (for internal revision or due to company
           policies, for example). If so, a backup job can be scheduled to run just before the
           cleanup job.
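
            As a sketch, a daily cleanup job on each FTA could remove audit files older
            than a given number of days. The path and retention period are examples only.

               #!/bin/sh
               # Remove plan audit logs older than 10 days (path is an example)
               find /tivoli/TWS/audit/plan -type f -mtime +10 -exec rm -f {} \;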


5.6.4 Monitoring file systems on Tivoli Workload Scheduler FTAs
           It is easier to deal with file system problems before they happen. If your file
           system fills up, Tivoli Workload Scheduler will no longer function and your job
           processing will stop. To avoid problems, monitor the file systems containing your
           Tivoli Workload Scheduler home directory and /tmp. For example, if you have a
           2 GB file system, you might want a warning at 80%, but if you have a smaller file
           system, you will need a warning when a lower percentage fills up. We cannot
           give you an exact percentage at which to be warned. This depends on many
           variables that change from installation to installation (or company to company).

           Monitoring or testing for the percentage of the file system can be done by, for
           example, IBM Tivoli Monitoring and IBM Tivoli Enterprise Console® (TEC).

           Example 5-18 shows an example of a shell script that tests for the percentage of
            the Tivoli Workload Scheduler file system filled and reports back if it is over 80%.




Example 5-18 Monitoring script
                # Extract the use% value for the TWS file system (assumed to be /dev/lv01)
                /usr/bin/df -P /dev/lv01 | grep TWS > tmp1$$
                /usr/bin/awk '{print $5}' tmp1$$ > tmp2$$
                /usr/bin/sed 's/%$//g' tmp2$$ > tmp3$$
                x=`cat tmp3$$`
                # The > operator must be escaped so that expr compares the two values
                # instead of redirecting output to a file named 80
                i=`expr $x \> 80`
                echo "This file system is less than 80% full." > tmp4$$
                if [ "$i" -eq 1 ]; then
                        echo "This file system is over 80% full. You need to
                              remove schedule logs and audit logs from the
                              subdirectories in the file system." > tmp4$$
                        fi
                cat tmp4$$
                rm tmp1$$ tmp2$$ tmp3$$ tmp4$$



5.6.5 Central repositories for important Tivoli Workload Scheduler files
                Tivoli Workload Scheduler has several files that are important for use of Tivoli
                Workload Scheduler and for the daily Tivoli Workload Scheduler production
                workload if you are running a Tivoli Workload Scheduler master domain manager
                or a Tivoli Workload Scheduler for z/OS end-to-end server. Managing these files
                across several Tivoli Workload Scheduler workstations can be a cumbersome
                and very time-consuming task. Using central repositories for these files can save
                time and make your management more effective.

                Script files
                Scripts (or the JCL) are very important objects when doing job scheduling on the
                Tivoli Workload Scheduler fault-tolerant agents. It is the scripts that actually
                perform the work or the job on the agent system, such as updating the payroll
                database or the customer inventory database.

                The job definition for distributed jobs in Tivoli Workload Scheduler or Tivoli
                Workload Scheduler for z/OS contains a pointer (the path or directory) to the
                script. The script by itself is placed locally on the fault-tolerant agent. Because
                the fault-tolerant agents have a local copy of the plan (Symphony) and the script
                to run, they can continue running jobs on the system even if the connection to the
                Tivoli Workload Scheduler master or the Tivoli Workload Scheduler for z/OS
                controller is broken. This way we have fault tolerance on the workstations.

                Managing scripts on several Tivoli Workload Scheduler fault-tolerant agents and
                making sure that you always have the correct versions on every fault-tolerant
                agent can be a time-consuming task. You also must ensure that the scripts are
                protected so that they cannot be updated by the wrong person. Poorly protected
                scripts can cause problems in your production environment if someone has



changed something without notifying the responsible planner or change
         manager.

         We suggest placing all scripts that are used for production workload in one
         common script repository. The repository can be designed in different ways. One
         way could be to have a subdirectory for each fault-tolerant workstation (with the
         same name as the name on the Tivoli Workload Scheduler workstation).

         All changes to scripts are made in this production repository. On a daily basis, for
         example, just before the plan is extended, the master scripts in the central
         repository are distributed to the fault-tolerant agents. The daily distribution can be
         handled by a Tivoli Workload Scheduler scheduled job. This job can be defined
          as a predecessor to the plan extend job.
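
          As a sketch, the distribution job could be as simple as the script below. The
          repository layout, host names, and the use of scp are assumptions; a software
          distribution application can be used instead, as described next.

             #!/bin/sh
             # Push the master copy of the scripts to each FTA before plan extension
             REPO=/prod/scripts                # one subdirectory per workstation
             for WS in F100 F101 F200          # workstation names double as host names here
             do
                scp -r $REPO/$WS/* tws@$WS:/tivoli/TWS/scripts/
             done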

         This approach can be made even more advanced by using a software distribution
         application to handle the distribution of the scripts. The software distribution
         application can help keep track of different versions of the same script. If you
         encounter a problem with a changed script in a production shift, you can simply
         ask the software distribution application to redistribute a previous version of the
         same script and then rerun the job.

         Security files
          The Tivoli Workload Scheduler security file, discussed in detail in 5.7, “Security on
         fault-tolerant agents” on page 323, is used to protect access to Tivoli Workload
         Scheduler database and plan objects. On every Tivoli Workload Scheduler
         engine (such as domain manager and fault-tolerant agent), you can issue
         conman commands for the plan and composer commands for the database.
         Tivoli Workload Scheduler security files are used to ensure that the right people
         have the right access to objects in Tivoli Workload Scheduler.

         Security files can be created or modified on every local Tivoli Workload
         Scheduler workstation, and they can be different from workstation to workstation.

         We suggest having a common security strategy for all Tivoli Workload Scheduler
         workstations in your IBM Tivoli Workload Scheduler network (and end-to-end
         network). This way, the security file can be placed centrally and changes are
         made only in the central security file. If the security file has been changed, it is
         simply distributed to all Tivoli Workload Scheduler workstations in your IBM Tivoli
         Workload Scheduler network.



5.7 Security on fault-tolerant agents
         In this section, we offer an overview of how security is implemented on Tivoli
         Workload Scheduler fault-tolerant agents (including domain managers). For


more details, see the IBM Tivoli Workload Scheduler Planning and Installation
                    Guide, SC32-1273.

                    Figure 5-18 shows the security model on Tivoli Workload Scheduler fault-tolerant
                    agents. When a user attempts to display a list of defined jobs, submit a new job
                     stream, add a new resource, or perform any other operation related to the Tivoli Workload
                    Scheduler plan or databases, Tivoli Workload Scheduler performs a check to
                    verify that the user is authorized to perform that action.



  [Figure 5-18 depicts the security model as nested roles: the TWS and root
  users have full access to all areas; the Operations group can manage the
  whole workload but cannot create job streams and has no root access; the
  Applications Manager can document jobs and schedules for the entire group
  and manage some production; the Application User can document own jobs and
  schedules; and the General User has display access only. Each role has a
  corresponding stanza in the security file (USER Root, USER Operations,
  USER AppManager, USER Application, USER User).]


Figure 5-18 Sample security setup

                    Tivoli Workload Scheduler users have different roles within the organization. The
                    Tivoli Workload Scheduler security model you implement should reflect these
                    roles. You can think of the different groups of users as nested boxes, as in
                    Figure 5-18 on page 324. The largest box represents the highest access, granted
                    to only the Tivoli Workload Scheduler user and the root user. The smaller boxes
                    represent more restricted roles, with correspondingly restricted access. Each



group that is represented by a box in the figure would have a corresponding
            stanza in the security file. Tivoli Workload Scheduler programs and commands
            read the security file to determine whether the user has the access that is
            required to perform an action.


5.7.1 The security file
            Each workstation in a Tivoli Workload Scheduler network has its own security
            file. These files can be maintained independently on each workstation, or you
            can keep a single centralized security file on the master and copy it periodically
            to the other workstations in the network.

            At installation time, a default security file is created that allows unrestricted
            access to only the Tivoli Workload Scheduler user (and, on UNIX workstations,
            the root user). If the security file is accidentally deleted, the root user can
            generate a new one.

            If you have one security file for a network of agents, you may wish to make a
            distinction between the root user on a fault-tolerant agent and the root user on
            the master domain manager. For example, you can restrict local users to
            performing operations that affect only the local workstation, while permitting the
            master root user to perform operations that affect any workstation in the network.

            A template file named TWShome/config/Security is provided with the software.
            During installation, a copy of the template is installed as TWShome/Security, and
            a compiled copy is installed as TWShome/../unison/Security.

            Security file stanzas
            The security file is divided into one or more stanzas. Each stanza limits access at
            three different levels:
               User attributes appear between the USER and BEGIN statements and
               determine whether a stanza applies to the user attempting to perform an
               action.
               Object attributes are listed, one object per line, between the BEGIN and END
               statements. Object attributes determine whether an object line in the stanza
               matches the object the user is attempting to access.
               Access rights appear to the right of each object listed, after the ACCESS
               statement. Access rights are the specific actions that the user is allowed to
               take on the object.




Important: Because only a subset of conman commands is available on FTAs
                  in an end-to-end environment, some of the ACCESS rights that would be
                  applicable in an ordinary non-end-to-end IBM Tivoli Workload Scheduler
                  network will not be applicable in an end-to-end network.


                The steps of a security check
                The steps of a security check reflect the three levels listed above:
                1. Identify the user who is attempting to perform an action.
                2. Determine the type of object being accessed.
                3. Determine whether the requested access should be granted to that object.

                Step 1: Identify the user
                When a user attempts to perform any Tivoli Workload Scheduler action, the
                security file is searched from top to bottom to find a stanza whose user attributes
                match the user attempting to perform the action. If no match is found in the first
                stanza, the user attributes of the next stanza are searched. If a stanza is found
                whose user attributes match that user, that stanza is selected for the next part of
                the security check. If no stanza in the security file has user attributes that match
                the user, access is denied.

                Step 2: Determine the type of object being accessed
                After the user has been identified, the stanza that applies to that user is
                searched, top-down, for an object attribute that matches the type of object the
                user is trying to access. Only that particular stanza (between the BEGIN and
                END statements) is searched for a matching object attribute. If no matching
                object attribute is found, access is denied.

                Step 3: Determine whether access is granted to that object
                If an object attribute is located that corresponds to the object that the user is
                attempting to access, the access rights following the ACCESS statement on that
                line in the file are searched for the action that the user is attempting to perform. If
                this access right is found, then access is granted. If the access right is not found
                on this line, then the rest of the stanza is searched for other object attributes
                (other lines) of the same type, and this step is repeated for each of these.

                Figure 5-19 on page 327 illustrates the steps of the security check algorithm.




[Figure 5-19 illustrates the security check: user johns, logged on to the master
domain manager Sol, issues the command conman 'release mars#weekly.cleanup'.
The security file on Sol is searched to (1) find the matching user stanza
(USER JohnSmith, CPU=@+LOGON=johns), (2) find a matching object line (the JOB
entries), and (3) find the required access right (RELEASE) on that line.]

Figure 5-19 Example of a Tivoli Workload Scheduler security check


5.7.2 Sample security file
                 Here are some things to note about the security file stanza (Example 5-19 on
                 page 328):
                    mastersm is an arbitrarily chosen name for this group of users.
                    The example security stanza above would match a user that logs on to the
                    master (or to the Framework via JSC), where the user name (or TMF
                    Administrator name) is maestro, root, or Root_london-region.
                     These users have full access to jobs, job streams, resources, prompts, files,
                    calendars, and workstations.
                    The users have full access to all parameters except those whose names
                    begin with r (parameter name=@ ~ name=r@ access=@).



For NT user definitions (userobj), the users have full access to objects on all
                workstations in the network.
                Example 5-19 Sample security file
                ###########################################################
                #Sample Security File
                ###########################################################
                #(1)APPLIES TO MAESTRO OR ROOT USERS LOGGED IN ON THE
                #MASTER DOMAIN MANAGER OR FRAMEWORK.
                user mastersm cpu=$master,$framework +logon=maestro,root,Root_london-region
                begin
                #OBJECT ATTRIBUTES ACCESS CAPABILITIES
                #--------------------------------------------
                job                        access=@
                schedule                   access=@
                resource                   access=@
                prompt                     access=@
                file                       access=@
                calendar                   access=@
                cpu                        access=@
                parameter name=@ ~ name=r@ access=@
                userobj cpu=@ + logon=@ access=@
                end


                Creating the security file
                To create user definitions, edit the template file TWShome/Security. Do not
                modify the original template in TWShome/config/Security. Then use the makesec
                command to compile and install a new operational security file. After it is
                installed, you can make further modifications by creating an editable copy of the
                operational file with the dumpsec command.

                The dumpsec command
                The dumpsec command takes the security file, generates a text version of it, and
                sends that to stdout. The user must have display access to the security file.
                    Synopsis:
                        dumpsec -v | -u
                        dumpsec > security-file
                    Description:
                    If no arguments are specified, the operational security file (../unison/Security)
                    is dumped. To create an editable copy of a security file, redirect the output of
                    the command to another file, as shown in “Example of dumpsec and
                    makesec” on page 330.




Arguments
   – -v displays command version information only.
   – -u displays command usage information only.
   – security-file specifies the name of the security file to dump.




Figure 5-20 The dumpsec command

The makesec command
The makesec command essentially does the opposite of what the dumpsec
command does. The makesec command takes a text security file, checks its
syntax, compiles it into a binary security file, and installs the new binary file as
the active security file. Changes to the security file take effect when Tivoli
Workload Scheduler is stopped and restarted. Affected programs are:
   Conman
   Composer
   Tivoli Workload Scheduler connectors

Simply exit the programs. The next time they are run, the new security definitions
will be recognized. Tivoli Workload Scheduler connectors must be stopped using
the wmaeutil command before changes to the security file will take effect for
users of JSC. The connectors will automatically restart as needed.

The user must have modify access to the security file.

 Note: On Windows NT, the connector processes must be stopped (using the
 wmaeutil command) before the makesec command will work correctly.

   Synopsis:
       makesec -v | -u
       makesec [-verify] in-file




Description:
                    The makesec command compiles the specified file and installs it as the
                    operational security file (../unison/Security). If the -verify argument is
                    specified, the file is checked for correct syntax, but it is not compiled and
                    installed.
                    Arguments:
                    – -v displays command version information only.
                    – -u displays command usage information only.
                    – -verify checks the syntax of the user definitions in the in-file only. The file
                      is not installed as the security file. (Syntax checking is performed
                      automatically when the security file is installed.)
                    – in-file specifies the name of a file or set of files containing user
                      definitions. A file name expansion pattern is permitted.

                Example of dumpsec and makesec
                Example 5-20 creates an editable copy of the active security file in a file named
                Security.conf, modifies the user definitions with a text editor, then compiles
                Security.conf and replaces the active security file.
                Example 5-20 Using dumpsec and makesec
                dumpsec > Security.conf
                vi Security.conf
                (Here you would make any required modifications to the Security.conf file)
                makesec Security.conf


                  Note: Add the Tivoli Administrator to the Tivoli Workload Scheduler security
                  file after you have installed the Tivoli Management Framework and Tivoli
                  Workload Scheduler connector.

                Configuring Tivoli Workload Scheduler security for the Tivoli
                Administrator

                In order to use the Job Scheduling Console on a master or on an FTA, the Tivoli
                Administrator user (or users) must be defined in the security file of that master or
                FTA. The $framework variable can be used as a user attribute in place of a
                specific workstation. This indicates a user logging in via the Job Scheduling
                Console.
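
                As a sketch, a stanza for the Tivoli Administrator could look like the
                following, modeled on Example 5-19. The administrator name and the access
                rights are examples; adapt them to your own security policy.

                   user twsadmins cpu=$framework +logon=Root_london-region
                   begin
                   job                        access=@
                   schedule                   access=@
                   resource                   access=@
                   prompt                     access=@
                   file                       access=@
                   calendar                   access=@
                   cpu                        access=@
                   end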




5.8 End-to-end scheduling tips and tricks
           In this section, we provide some tips, tricks, and troubleshooting suggestions for
           the end-to-end scheduling environment.


5.8.1 File dependencies in the end-to-end environment
            Use the filewatch.sh program that is delivered with Tivoli Workload Scheduler.
            A description of its usage and parameters appears at the top of the filewatch.sh script.

           In an ordinary (non-end-to-end) IBM Tivoli Workload Scheduler network — one
           in which the MDM is a UNIX or Windows workstation — it is possible to create a
           file dependency on a job or job stream; this is not possible in an end-to-end
           network because the controlling system is Tivoli Workload Scheduler for z/OS.

            It is very common to use files as triggers or predecessors to job flows in a
            distributed environment.

           Tivoli Workload Scheduler 8.2 includes TWSHOME/bin/filewatch.sh, a sample
           script that can be used to check for the existence of files. You can configure the
           script to check periodically for the file, just as with a real Tivoli Workload
           Scheduler file dependency. By defining a job that runs filewatch.sh, you can
           implement a file dependency.

           To learn more about filewatch and how to use it, read the detailed description in
           the comments at the top of the filewatch.sh script. The options of the script are:
           -kd (mandatory)        The options to pass to the test command. See the man
                                  page for “test” for a list of allowed values.
           -fl (mandatory)        Path name of the file (or directory) to look for.
            -dl                    The deadline period (in seconds); cannot be used
                                   together with -nd. One of -dl or -nd is required.
            -nd                    Suppress the deadline; cannot be used together with -dl.
                                   One of -dl or -nd is required.
           -int (mandatory)       The search interval period (in seconds).
           -rc (optional)         The return code that the script will exit with if the deadline
                                  is reached without finding the file (ignored if -nd is used).
           -tsk (optional)        The path of the task launched if the file is found.

           Here are two filewatch examples:
              In this example, the script checks for file /tmp/filew01 every 15 seconds
              indefinitely:
               JOBSCR('/tws/bin/filewatch.sh -kd f -fl /tmp/filew01 -int 15 -nd')



In this example, the script checks for file /tmp/filew02 every 15 seconds for 60
                      seconds. If the file is not there 60 seconds after the check has started, the script
                      will end with return code 12.
                      JOBSCR('/tws/bin/filewatch.sh -kd f -fl /tmp/filew02 -int 15 -dl 60 -rc 12')

                  Figure 5-21 shows how the filewatch script might be used as a predecessor to
                  the job that will process or work with the file being “watched.” This way you can
                  make sure that the file to be processed is there before running the job that will
                  process the file.



      [Figure 5-21 shows that the job that processes a file can be made dependent
      on a filewatch job that watches for the file; the log of the filewatch job
      is also shown.]




Figure 5-21 How to use filewatch.sh to set up a file dependency


5.8.2 Handling offline or unlinked workstations

                    Tip: If the workstation does not link as it should, the cause can be that the
                     writer process has not started correctly or the run number for the Symphony
                    file on the fault-tolerant workstation is not the same as the run number on the
                    master. If you mark the unlinked workstations and right-click, a pop-up menu
                    opens as shown in Figure 5-22 on page 333. Click Link to try to link the
                    workstation.




Figure 5-22 Context menu for workstation linking

You can check the Symphony run number and the Symphony status in the legacy
ISPF using option 6.6.

 Tip: If the workstation is Not Available/Offline, the cause might be that the
 mailman, batchman, and jobman processes are not started on the
 fault-tolerant workstation. You can right-click the workstation to open the
 context menu shown in Figure 5-22, then click Set Status. This opens a new
 window (Figure 5-23), in which you can try to activate the workstation by
 clicking Active. This action attempts to start the mailman, batchman, and
 jobman processes on the fault-tolerant workstation by issuing a conman start
 command on the agent.




Figure 5-23 Pop-up window to set status of workstation




5.8.3 Using dummy jobs
                Because it is not possible to add a dependency at the job stream level in Tivoli
                Workload Scheduler for z/OS (as it is in the IBM Tivoli Workload Scheduler
                distributed product), dummy start and dummy end general jobs are a workaround
                for this Tivoli Workload Scheduler for z/OS limitation. When using dummy start
                and dummy end general jobs, you can always uniquely identify the start point
                and the end point for the jobs in the job stream.


5.8.4 Placing job scripts in the same directories on FTAs
                The SCRPTLIB members can be reused in several job streams and on different
                fault-tolerant workstations of the same type (such as UNIX or Windows). For
                example, if a job (script) is scheduled on all of your UNIX systems, you can
                create one SCRPTLIB member for this job and define it in several job streams on
                the associated fault-tolerant workstations, though this requires that the script is
                placed in the same directory on all of your systems. This is another good reason
                to have all job scripts placed in the same directories across your systems.
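
                As a sketch, a single SCRPTLIB member that is reused on several UNIX
                fault-tolerant workstations could look like the following; the member name,
                script path, and user are examples only.

                   /* Hypothetical member UXCLEAN, referenced by job streams defined   */
                   /* on several UNIX fault-tolerant workstations                      */
                   JOBREC JOBSCR('/tivoli/TWS/scripts/cleanup') JOBUSR(tws-e)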


5.8.5 Common errors for jobs on fault-tolerant workstations
                This section discusses two of the most common errors for jobs on fault-tolerant
                workstations.

                Handling errors in script definitions
                When adding a job stream to the current plan in Tivoli Workload Scheduler for
                z/OS (using JSC or option 5.1 from legacy ISPF), you may see this error
                message:
                    EQQM071E A JOB definition referenced by this occurrence is wrong

                This shows that there is an error in the definition for one or more jobs in the job
                stream and that the job stream is not added to the current plan. If you look in the
                EQQMLOG for the Tivoli Workload Scheduler for z/OS engine, you will find
                messages similar to Example 5-21.
                Example 5-21 EQQMLOG messages
                EQQM992E WRONG JOB DEFINITION FOR THE FOLLOWING OCCURRENCE:
                EQQZ068E JOBRC IS AN UNKNOWN COMMAND AND WILL NOT BE PROCESSED
                EQQZ068I FURTHER STATEMENT PROCESSING IS STOPPED


                In our example, the F100J011 member in EQQSCLIB looks like:
                    JOBRC JOBSCR('/tivoli/TWS/scripts/japjob1') JOBUSR(tws-e)




Note the typo error: JOBRC should be JOBREC. The solution to this problem is
simply to correct the error and try to add the job stream again. The job stream
must be added to the Tivoli Workload Scheduler for z/OS plan again, because
the job stream was not added the first time (due to the typo).

 Note: You will get similar error messages in the EQQMLOG for the plan
 programs if the job stream is added during plan extension. The error
 messages that are issued by the plan program are:
     EQQZ068E JOBRC IS AN UNKNOWN COMMAND AND WILL NOT BE PROCESSED
     EQQZ068I FURTHER STATEMENT PROCESSING IS STOPPED
     EQQ3077W BAD MEMBER F100J011 CONTENTS IN EQQSCLIB

 Note that the plan extension program will end with return code 0.

If an FTA job is defined in Tivoli Workload Scheduler for z/OS but the
corresponding JOBREC is missing, the job will be added to the Symphony file
but it will be set to priority 0 and state FAIL. This combination of priority and state
is not likely to occur normally, so if you see a job like this, you can assume that
the problem is that the JOBREC was missing when the Symphony file was built.

Another common error is a misspelled name for the script or the user (in the
JOBREC, JOBSCR, or JOBUSR definition) in the FTW job.

Say we have the JOBREC definition in Example 5-22.
Example 5-22 Typo in JOBREC
/* Definition for F100J010 job to be executed on F100 machine                  */
/*                                                                             */
JOBREC JOBSCR('/tivoli/TWS/scripts/jabjob1') JOBUSR(tws-e)


Here the typo error is in the name of the script. It should be japjob1 instead of
jabjob1. This typo will result in an error with the error code FAIL when the job is
run. The error will not be caught by the plan programs or when you add the job
stream to the plan in Tivoli Workload Scheduler for z/OS.

It is easy to correct this error using the following steps:
1. Correct the typo in the member in the SCRPTLIB.
2. Add the same job stream again to the plan in Tivoli Workload Scheduler for
   z/OS.

This way of handling typos in JOBREC definitions is essentially the same as
performing a rerun from a Tivoli Workload Scheduler master. The job stream must be
re-added to the Tivoli Workload Scheduler for z/OS plan to have Tivoli Workload
Scheduler for z/OS send the new JOBREC definition to the fault-tolerant workstation
agent. Remember that when you extend or replan the Tivoli Workload Scheduler for
z/OS plan, the JOBREC definition is built into the Symphony file. By re-adding the
job stream, we ask Tivoli Workload Scheduler for z/OS to send the re-added job
stream, including the new JOBREC definition, to the agent.
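For reference, after step 1 the corrected F100J010 member contains the same JOBREC
as Example 5-22, with only the script name changed:
    /* Definition for F100J010 job to be executed on F100 machine              */
    JOBREC JOBSCR('/tivoli/TWS/scripts/japjob1') JOBUSR(tws-e)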

                Handling the wrong password definition for Windows FTW
                If you have defined the wrong password for a Windows user ID in the USRREC
                topology definition, or if the password has been changed on the Windows
                machine, the FTW job will end in error with the error code FAIL.

                To solve this problem, you have two options:
                    Correct the wrong USRREC definition and redistribute the Symphony file
                    (using option 3.5 from legacy ISPF).
                    This approach can be disruptive if you are running a large batch load on
                    FTWs and are in the middle of a batch peak.
                    Log on to the first-level domain manager (the domain manager directly
                    connected to the Tivoli Workload Scheduler for z/OS server; if there is
                    more than one first-level domain manager, log on to the one that is in
                    the hierarchy of the FTW), then alter the password either with conman or
                    with a JSC instance pointing to the first-level domain manager, as
                    sketched after this list. When you have changed the password, simply
                    rerun the job that was in error. The USRREC definition should still be
                    corrected so that it takes effect the next time the Symphony file is
                    created.
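                A minimal sketch of the second option follows; FTWWIN1 and winuser are
                illustrative workstation and user names, and you should verify the exact
                altpass syntax for your fix pack level in the conman reference:
                    conman
                    altpass FTWWIN1#winuser;"newpassword"
                    exit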


5.8.6 Problems with port numbers
                There are two different parameters named PORTNUMBER, one in the
                SERVOPTS that is used for the JSC and OPC Connector, and one in the
                TOPOLOGY parameters that is used by the E2E Server to communicate with the
                distributed FTAs.

                The two PORTNUMBER parameters must have different values. The localopts file
                for each FTA has a parameter named nm port, which is the port on which
                netman listens. The nm port value must match the CPUREC CPUTCPIP value for
                that FTA. There is no requirement that CPUTCPIP match the TOPOLOGY
                PORTNUMBER. The TOPOLOGY PORTNUMBER and HOSTNAME values are embedded in the
                Symphony file, which enables the FTA to know how to communicate back to
                OPCMASTER. The next sections illustrate different ways in which setting the
                values for PORTNUMBER and CPUTCPIP incorrectly can cause problems in the E2E
                environment.
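                To summarize these relationships, here is an illustrative, consistent set of
                definitions (the port numbers are examples only, and FTA1 is a hypothetical
                workstation name; only the relationships between the values matter):
                    /* E2E Server parmlib                                       */
                    SERVOPTS PORTNUMBER(446)
                    /* JSC and OPC Connector use 446; E2E uses 31111            */
                    TOPOLOGY PORTNUMBER(31111)
                    /* CPUTCPIP must match nm port in localopts on FTA1         */
                    CPUREC CPUNAME(FTA1)
                       CPUTCPIP(31182)

                    # localopts on FTA1
                    nm port =31182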



CPUTCPIP not the same as NM PORT
The value for CPUTCPIP in the CPUREC parameter for an FTA should always be
set to the same port that the FTA has defined as nm port in localopts.

We did some tests to see what errors occur if the wrong value is used for
CPUTCPIP. In the first test, nm port for the domain manager (DM) HR82 was
31111 but CPUTCPIP was set to 31122, a value that was not used by any FTA
on our network. The current plan (CP) was extended to distribute a Symphony
file with the wrong CPUTCPIP in place. The DM failed to link and the messages
in Example 5-23 were seen in the USS stdlist TWSMERGE log.

Example 5-23 Excerpt from TWSMERGE log
MAILMAN:+ AWSBCV082I Cpu HR82, Message: AWSDEB003I Writing socket: EDC8128I
Connection refused.
MAILMAN:+ AWSBCV035W WARNING: Linking to HR82 failed, will write to POBOX.


Therefore, if the DM will not link and the messages shown above are seen in
TWSMERGE, the nm port value should be checked and compared to the
CPUTCPIP value. In this case, correcting the CPUTCPIP value and running a
Symphony Renew job eliminated the problem.
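In this case the corrected definition simply points CPUTCPIP back at the port on
which netman listens on HR82 (31111 in our environment):
      CPUREC CPUNAME(HR82)
         CPUTCPIP(31111)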

We did another test with the same DM, this time setting CPUTCPIP to 31113.

Example 5-24 Setting CPUTCPIP to 31113
CPUREC CPUNAME(HR82)
   CPUTCPIP(31113)


The TOPOLOGY PORTNUMBER was also set to 31113, its normal value:
      TOPOLOGY PORTNUMBER(31113)

After cycling the E2E Server and running a CP EXTEND, the DM and all FTAs
were LINKED and ACTIVE, which was not expected (Example 5-25).

Example 5-25 Messages showing DM and all the FTAs are LINKED and ACTIVE
EQQMWSLL -------- MODIFYING WORK STATIONS IN THE CURRENT PLAN Row 1 to 8 of 8

Enter the row command S to select a work station for modification, or
I to browse system information for the destination.

Row   Work   station                    L S T R Completed     Active Remaining
cmd   name   text                               oper dur.     oper   oper dur.
'     HR82   PDM on HORRIBLE            L A C A     4    0.00      0    13     0.05
'     OP82   MVS XAGENT on HORRIBLE     L A C A     0    0.00      0     0     0.00
'     R3X1   SAP XAGENT on HORRIBLE     L A C A     0    0.00      0     0     0



How could the DM be ACTIVE if the CPUTCPIP value was intentionally set to the
                wrong value? We found that there was an FTA on the network that was set up
                with nm port=31113. It was actually an MDM (master domain manager) for a
                Tivoli Workload Scheduler 8.1 distributed-only (not E2E) environment, so our
                Version 8.2 E2E environment connected to the Version 8.1 MDM as if it were
                HR82. This illustrates that extreme care must be taken to code the CPUTCPIP
                values correctly, especially if there are multiple Tivoli Workload Scheduler
                environments present (for example, a test system and a production system).

                The localopts nm ipvalidate parameter can be used to prevent the Symphony
                file from being overwritten as a result of an incorrect setup. If the
                following is specified in localopts:
                        nm ipvalidate=full
                the connection is not allowed if IP validation fails. However, if SSL is
                active, the recommendation is to use the following localopts parameter:
                        nm ipvalidate=none


                PORTNUMBER set to PORT reserved for another task
                We wanted to test the effect of setting the TOPOLOGY PORTNUMBER parameter
                to a port that is reserved for use by another task. The data set specified
                by the PROFILE DD statement in the TCP/IP started task procedure had the
                parameters shown in Example 5-26.

                Example 5-26 TOPOLOGY PORTNUMBER parameter
                PORT
                   3000 TCP CICSTCP                    ; CICS Socket


                After setting PORTNUMBER in TOPOLOGY to 3000 and running a CP EXTEND to
                create a new Symphony file, there were no obvious indications in the
                messages that there was a problem with the PORTNUMBER setting. However, the
                following message appeared in the NETMAN log in USS stdlist/logs:
                    NETMAN:Listening on 3000 timeout 10 started Sun Aug   1 21:01:57 2004

                The messages in Example 5-27 then occurred repeatedly in the NETMAN log.

                Example 5-27 Excerpt from the NETMAN log
                NETMAN:+ AWSEDW020E Error opening IPC:
                NETMAN:AWSDEB001I Getting a new socket: 7




If these messages are seen and the DM will not link, the following command can be
issued to determine whether the problem is a reserved TCP/IP port:
      TSO NETSTAT PORTLIST

Example 5-28 shows the resulting output, including the entry for the PORTNUMBER
port (3000).

Example 5-28 Output of the TSO NETSTAT PORTLIST command
EZZ2350I   MVS TCP/IP   NETSTAT CS V1R5         TCPIP Name: TCPIP
EZZ2795I   Port# Prot   User     Flags      Range       IP Address
EZZ2796I   ----- ----   ----     -----      -----       ----------
EZZ2797I   03000 TCP    CICSTCP DA


PORTNUMBER set to PORT already in use
PORTNUMBER in TOPOLOGY was set to 424, which was already in use as the
TCPIPPORT by the controller. Everything worked correctly, but when the E2E
Server was shut down, the message in Example 5-29 occurred in the controller
EQQMLOG every 10 minutes.

Example 5-29 Excerpt from the controller EQQMLOG
08/01 18.48.49 EQQTT11E AN UNDEFINED TRACKER AT IP ADDRESS 9.48.204.143
ATTEMPTED TO CONNECT TO THE
08/01 18.48.49 EQQTT11I CONTROLLER. THE REQUEST IS NOT ACCEPTED
EQQMA11E Cannot allocate connection
EQQMA17E TCP/IP socket I/O error during Connect() call for
"SocketImpl<Binding=/192.227.118.43,port=31111,localport=32799>", failed
with error: 146=Connection refused


When the E2E Server was up, it handled port 424. When the E2E Server was
down, port 424 was handled by the controller task (which still had TCPIPPORT
set to the default value of 424). Because there were some TCP/IP connected
trackers defined on that system, message EQQTT11E was issued when the FTA
IP addresses did not match the TCP/IP addresses in the ROUTOPTS parameter.

TOPOLOGY PORTNUMBER set the same as SERVOPTS
PORTNUMBER
The PORTNUMBER in SERVOPTS is used for JSC and OPC Connector. If the
TOPOLOGY PORTNUMBER is set to the same value as the SERVOPTS
PORTNUMBER, E2E processing will still work, but errors will occur when starting
the OPC Connector. We did a test with the parmlib member for the E2E Server
containing the values shown in Example 5-30 on page 340.




Example 5-30 TOPOLOGY and SERVOPTS PORTNUMBER are the same
                SERVOPTS   SUBSYS(O82C)
                   PROTOCOL(E2E,JSC)
                   PORTNUMBER(446)
                TOPOLOGY PORTNUMBER(446)


                The OPC Connector got the error messages shown in Example 5-31 and the
                JSC would not function.

                Example 5-31 Error message for the OPC Connector
                GJS0005E Cannot load workstation list. Reason: EQQMA11E Cannot allocate
                connection
                EQQMA17E TCP/IP socket I/O error during Recv() call for "SocketImpl<Binding=
                dns name/ip address,port=446,localport=4699>", failed with error:
                10054=Connection reset by peer


                For the OPC Connector and JSC to work again, it was necessary to change the
                TOPOLOGY PORTNUMBER to a different value (not equal to the SERVOPTS
                PORTNUMBER) and cycle the E2E Server task. Note that this problem could also
                occur with the JSC and E2E PROTOCOL functions implemented in separate server
                tasks (one task E2E only, one task JSC only) if the two PORTNUMBER values
                were set to the same value.
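                For example, keeping the SERVOPTS value from Example 5-30 and moving the
                TOPOLOGY port to a free value such as 31111 (the value used elsewhere in our
                scenarios; any unused port that differs from the SERVOPTS port will do)
                restores a working combination:
                    SERVOPTS   SUBSYS(O82C)
                       PROTOCOL(E2E,JSC)
                       PORTNUMBER(446)
                    TOPOLOGY PORTNUMBER(31111)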


5.8.7 Cannot switch to new Symphony file (EQQPT52E) messages
                The EQQPT52E message, with text as shown in Example 5-32, can be difficult
                to troubleshoot because there are several possible causes.

                Example 5-32 EQQPT52E message
                EQQPT52E Cannot switch to the new symphony file:
                run numbers of Symphony (x) and CP (y) aren't matching


                The x and y in the example message would be replaced by the actual run
                number values. Sometimes the problem is resolved by running a Symphony
                Renew or CP REPLAN (or CP EXTEND) job. However, there are some other
                things to check if this does not correct the problem:
                    The EQQPT52E message can be caused if new FTA workstations are added
                    via the Tivoli Workload Scheduler for z/OS dialog, but the TOPOLOGY parms
                    are not updated with the new CPUREC information. In this case, adding the
                    TOPOLOGY information and running a CP batch job should resolve the
                    problem.




EQQPT52E can also occur if there are problems with the user ID used to run the
   CP batch job or the E2E Server task. One clue that a user ID problem is
   involved is if, after the CP batch job completes, there is still a file in the
   WRKDIR whose name is Sym plus the user ID that the CP batch job runs under
   (see the check sketched after this list). For example, if the CP EXTEND job
   runs under ID TWSRES9, the file in the WRKDIR would be named SymTWSRES9. If
   security had been set up correctly, the SymTWSRES9 file would have been
   renamed to Symnew before the CP batch job ended.
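A quick check for such a leftover file can be made from USS; the work directory
path below is only an illustration and should be replaced by the WRKDIR defined
for your E2E Server:
      ls -l /tws/wrkdir/Sym*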

If the cause of the EQQPT52E still cannot be determined, add the DIAGNOSE
statements in Example 5-33 to the indicated parameter members.

Example 5-33 DIAGNOSE statements added
(1) CONTROLLER: DIAGNOSE NMMFLAGS('00003000')
(2) BATCH (CP EXTEND): DIAGNOSE PLANMGRFLAGS('00040000')
(3) SERVER : DIAGNOSE TPLGYFLAGS(X'181F0000')


Then collect this list of documentation for analysis:
   Controller and server EQQMLOGs
   Output of the CP EXTEND (EQQDNTOP) job
   EQQTWSIN and EQQTWSOU files
   USS stdlist/logs directory (or a tar backup of the entire WRKDIR, as sketched
   below)
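A tar backup of the entire work directory can be taken from USS with commands like
the following; again, the path is only an illustration:
      cd /tws/wrkdir
      tar -cvf /tmp/wrkdir.tar .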






  Appendix A.     Connector reference
                  In this appendix, we describe the commands related to the IBM Tivoli Workload
                  Scheduler and IBM Tivoli Workload Scheduler for z/OS connectors. We also
                  describe some Tivoli Management Framework commands related to the
                  connectors.




Setting the Tivoli environment
                To use the commands described in this appendix, you must first set the Tivoli
                environment. To do this, log in as root or administrator, then enter one of the
                commands shown in Table A-1.
                Table A-1 Setting the Tivoli environment
                  Shell            Command to set the Tivoli environment

                  sh or ksh        . /etc/Tivoli/setup_env.sh

                  csh              source /etc/Tivoli/setup_env.csh

                  DOS              %SystemRoot%\system32\drivers\etc\Tivoli\setup_env.cmd
                  (Windows)        bash



Authorization roles required
                To manage connector instances, you must be logged in as a Tivoli administrator
                with one or more of the roles listed in Table A-2.
                Table A-2 Authorization roles required for working with connector instances
                  An administrator with this role...              Can perform these actions

                  user                                            Use the instance, view instance settings

                  admin, senior, or super                         Use the instance, view instance settings,
                                                                  create and remove instances, change
                                                                  instance settings, start and stop instances


                  Note: To control access to the scheduler, the TCP/IP server associates each
                  Tivoli administrator with a Resource Access Control Facility (RACF) user.
                  For this reason, a Tivoli administrator should be defined for every RACF
                  user. For additional information, refer to Tivoli Workload Scheduler V8R1
                  for z/OS Customization and Tuning, SH19-4544.



Working with Tivoli Workload Scheduler for z/OS
connector instances
                This section describes how to use the wopcconn command to create and manage
                Tivoli Workload Scheduler for z/OS connector instances.




Much of the following information is excerpted from the IBM Tivoli Workload
         Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257.


The wopcconn command
         Use the wopcconn command to create, remove, and manage Tivoli Workload
         Scheduler for z/OS connector instances. This program is downloaded when you
         install the connector. Table A-3 describes how to use wopcconn in the command
         line to manage connector instances.

          Note: Before you can run wopcconn, you must set the Tivoli environment. See
          “Setting the Tivoli environment” on page 344.

         Table A-3 Managing Tivoli Workload Scheduler for z/OS connector instances
          If you want to...            Use this syntax

          Create an instance           wopcconn -create [-h node] -e instance_name -a
                                       address -p port

          Stop an instance             wopcconn -stop -e instance_name | -o object_id

          Start an instance            wopcconn -start -e instance_name | -o object_id

          Restart an instance          wopcconn -restart -e instance_name | -o object_id

          Remove an instance           wopcconn -remove -e instance_name | -o object_id

          View the settings of an      wopcconn -view -e instance_name | -o object_id
          instance

          Change the settings of an    wopcconn -set -e instance_name | -o object_id [-n
          instance                     new_name] [-a address] [-p port] [-t trace_level] [-l
                                       trace_length]

            node is the name or the object ID (OID) of the managed node on which you
            are creating the instance. The TMR server name is the default.
            instance_name is the name of the instance.
            object_id is the object ID of the instance.
            new_name is the new name for the instance.
            address is the IP address or host name of the z/OS system where the Tivoli
            Workload Scheduler for z/OS subsystem to which you want to connect is
            installed.
            port is the port number of the OPC TCP/IP server to which the connector
            must connect.



             trace_level is the trace detail level, from 0 to 5.
             trace_length is the maximum length of the trace file.

                Example
                We used a z/OS system with the host name twscjsc. On this machine, a TCP/IP
                server connects to port 5000. Yarmouth is the name of the TMR managed node
                where we installed the OPC connector. We called this new connector instance
                twsc.

                We created the instance with the following command:
                    wopcconn -create -h yarmouth -e twsc -a twscjsc -p 5000
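                You can then verify the new instance by displaying its settings with the
                -view option from Table A-3:
                    wopcconn -view -e twsc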

                You can also run the wopcconn command in interactive mode. To do this, perform
                the following steps:
                1. At the command line, enter wopcconn with no arguments.
                2. Select choice number 1 in the first menu.
                Example A-1 Running wopcconn in interactive mode
                Name                          :   TWSC
                Object id                     :   1234799117.5.38#OPC::Engine#
                Managed node                  :   yarmouth
                Status                        :   Active

                OPC version                   : 2.3.0

                2. Name                       : TWSC

                3. IP Address or Hostname: TWSCJSC
                4. IP portnumber         : 5000
                5. Data Compression      : Yes

                6. Trace Length               : 524288
                7. Trace Level                : 0

                0. Exit




Working with Tivoli Workload Scheduler connector
instances
                This section describes how to use the wtwsconn.sh command to create and
                manage Tivoli Workload Scheduler connector instances.



For more information, refer to IBM Tivoli Workload Scheduler Job Scheduling
         Console User’s Guide, Feature Level 1.3, SC32-1257.


The wtwsconn.sh command
         Use the wtwsconn.sh utility to create, remove, and manage connector instances.
         This program is downloaded when you install the connector.

          Note: Before you can run wtwsconn.sh, you must set the Tivoli environment.
          See “Setting the Tivoli environment” on page 344.

          Table A-4 Managing Tivoli Workload Scheduler connector instances
          If you want to...            Use this syntax

           Create an instance           wtwsconn.sh -create [-h node] -n instance_name
                                        -t twsdir

          Stop an instance             wtwsconn.sh -stop -n instance | -t twsdir

          Remove an instance           wtwsconn.sh -remove -n instance_name

          View the settings of an      wtwsconn.sh -view -n instance_name
          instance

          Change the Tivoli Workload   wtwsconn.sh -set -n instance_name -t twsdir
          Scheduler home directory
          of an instance

            node specifies the node where the instance is created. If not specified, it
            defaults to the node from which the script is run.
            instance is the name of the new instance. This name identifies the engine
            node in the Job Scheduling tree of the Job Scheduling Console. The name
            must be unique within the Tivoli Managed Region.
            twsdir specifies the home directory of the Tivoli Workload Scheduler engine
            that is associated with the connector instance.

          Example
          Yarmouth is the name of the TMR managed node where we installed the Tivoli
          Workload Scheduler connector; the Tivoli Workload Scheduler engine on this
          node is installed in /tivoli/TWS. We called the new connector instance
          Yarmouth-A.

          We created the instance with the following command:
            wtwsconn.sh -create -h yarmouth -n Yarmouth-A -t /tivoli/TWS/
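          The settings of the new instance can then be displayed with the -view option
          shown in the table above:
             wtwsconn.sh -view -n Yarmouth-A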




Useful Tivoli Framework commands
                These commands can be used to check your Framework environment. Refer to
                the Tivoli Framework 3.7.1 Reference Manual, SC31-8434, for more details.

                wlookup -ar ProductInfo lists the products that are installed on the Tivoli server.

                wlookup -ar PatchInfo lists the patches that are installed on the Tivoli server.

                wlookup -ar MaestroEngine lists the instances of this class type (same for the
                other classes).

                For example:
                    barb 1318267480.2.19#Maestro::Engine#

                The number before the first period (.) is the region number and the second
                number is the managed node ID (1 is the Tivoli server). In a multi-Tivoli
                environment, you can determine where a particular instance is installed by
                looking at this number because all Tivoli regions have a unique ID.

                wuninst -list lists all products that can be uninstalled.

                wuninst {ProductName} -list lists the managed nodes where a product is
                installed.

                wmaeutil Maestro -Version lists the versions of the installed engine, database,
                and plan.

                wmaeutil Maestro -dbinfo lists information about the database and the plan.

                wmaeutil Maestro -gethome lists the installation directory of the connector.
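                To tie these commands together, the following sketch (for a UNIX managed
                node, using the setup command from Table A-1 on page 344) verifies that the
                scheduler classes and the connector are in place:
                    . /etc/Tivoli/setup_env.sh
                    wlookup -ar ProductInfo
                    wlookup -ar MaestroEngine
                    wmaeutil Maestro -Version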




Related publications

                 The publications listed in this section are considered particularly suitable for a
                 more detailed discussion of the topics covered in this redbook.



IBM Redbooks
                 For information on ordering these publications, see “How to get IBM Redbooks”
                 on page 350. Note that some of the documents referenced here may be available
                 in softcopy only.
                     End-to-End Scheduling with OPC and TWS Mainframe and Distributed
                     Environment, SG24-6013
                     End-to-End Scheduling with Tivoli Workload Scheduler 8.1, SG24-6022
                     High Availability Scenarios with IBM Tivoli Workload Scheduler and IBM Tivoli
                     Framework, SG24-6632
                     IBM Tivoli Workload Scheduler Version 8.2: New Features and Best
                     Practices, SG24-6628
                     Implementing TWS Extended Agent for Tivoli Storage Manager, GC24-6030
                     TCP/IP in a Sysplex, SG24-5235



Other publications
                 These publications are also relevant as further information sources:
                     IBM Tivoli Management Framework 4.1 User’s Guide, GC32-0805
                     IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes,
                     Feature level 1.3, SC32-1258
                     IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide,
                     Feature Level 1.3, SC32-1257
                     IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273
                     IBM Tivoli Workload Scheduler Reference Guide, SC32-1274
                     IBM Tivoli Workload Scheduler Release Notes Version 8.2 (Maintenance
                     Release April 2004), SC32-1277
                     IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265



IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264
                 IBM Tivoli Workload Scheduler for z/OS Managing the Workload, SC32-1263
                 IBM Tivoli Workload Scheduler for z/OS Messages and Codes, Version 8.2
                 (Maintenance Release April 2004), SC32-1267
                 IBM Tivoli Workload Scheduling Suite General Information Version 8.2,
                 SC32-1256
                 OS/390 V2R10.0 System SSL Programming Guide and Reference,
                 SC23-3978
                 Tivoli Workload Scheduler for z/OS Installation Guide, SH19-4543
                 z/OS V1R2 Communications Server: IP Configuration Guide, SC31-8775



Online resources
              These Web sites and URLs are also relevant as further information sources:
                  IBM Tivoli Workload Scheduler publications in PDF format
                  http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html
                  Search for IBM fix packs
                  http://www.ibm.com/support/us/all_download_drivers.html
                  Adobe (Acrobat) Reader
                  http://www.adobe.com/products/acrobat/readstep2.html



How to get IBM Redbooks
              You can search for, view, or download Redbooks, Redpapers, Hints and Tips,
              draft publications, and Additional materials, as well as order hardcopy Redbooks
              or CD-ROMs, at this Web site:
                 ibm.com/redbooks



Help from IBM
              IBM Support and downloads
                 ibm.com/support

              IBM Global Services
                 ibm.com/services



Abbreviations and acronyms
ACF       Advanced Communications Function
API       Application Programming Interface
ARM       Automatic Restart Manager
CORBA     Common Object Request Broker Architecture
CP        Control point
DM        Domain manager
DVIPA     Dynamic virtual IP address
EM        Event Manager
FTA       Fault-tolerant agent
FTW       Fault-tolerant workstation
GID       Group Identification Definition
GS        General Service
GUI       Graphical user interface
HFS       Hierarchical File System
IBM       International Business Machines Corporation
ISPF      Interactive System Productivity Facility
ITSO      International Technical Support Organization
ITWS      IBM Tivoli Workload Scheduler
JCL       Job control language
JES       Job Entry Subsystem
JSC       Job Scheduling Console
JSS       Job Scheduling Services
MN        Managed nodes
NNM       Normal Mode Manager
OMG       Object Management Group
OPC       Operations, planning, and control
PDS       Partitioned data set
PID       Process ID
PIF       Program interface
PSP       Preventive service planning
PTF       Program temporary fix
RACF      Resource Access Control Facility
RFC       Remote Function Call
RODM      Resource Object Data Manager
RTM       Recovery and Terminating Manager
SCP       Symphony Current Plan
SMF       System Management Facility
SMP       System Modification Program
SMP/E     System Modification Program/Extended
STDLIST   Standard list
TMF       Tivoli Management Framework
TMR       Tivoli Management Region
TSO       Time-sharing option
TWS       IBM Tivoli Workload Scheduler
TWSz      IBM Tivoli Workload Scheduler for z/OS
USS       UNIX System Services
VIPA      Virtual IP address
VTAM      Virtual Telecommunications Access Method
WA        Workstation Analyzer
WLM       Workload Manager
X-agent   Extended agent
XCF       Cross-system coupling facility
Index
                                                   busiest server 55
Symbols                                            business processing cycles 36
$framework 330


Numerics                                           C
                                                   CA7 55
24/7 availability 2
                                                   caching mechanism 14
8.2-TWS-FP04 207
                                                   CALENDAR() 248
                                                   calendars 35
A                                                  catalog a dataset 45
Access rights 325                                  cataloged procedures 45
ACF/VTAM connections 29                            central repository 322
active engine 29                                   centralized dependency 14
active TCP/IP stack 131                            centralized script
ad hoc prompts 88                                      overview 73
alter long-term plan 38                            centralized script definition rules 11
APAR list 119                                      Centralized Script Library Management 10
APAR PQ80341 123                                   Centralized Scripting 10
APPC communication 173                             centralized scripts 11
APPC ROUTOPTS 117                                  certificates 21
APPC server 31                                     changes to your production workload 37
APPL 18                                            classical tracker agent environment 280
application 33                                     CLIST 47
AS/400 job 2                                       CODEPAGE 180
audit 252                                          common interface 6
audit trail 49                                     communication layer 155
auditing file 321                                  communications between workstations 51
automate operator activities 5                     Comparison expression 19
automatic job tailoring 37                         component group 210
Automatic Recovery 11, 33, 221                     Composer 329
Automatic Restart Manager 46                       composer create 299
availability of resources 32                       compressed form 146
                                                   compression 146
                                                   computer-processing tasks 33
B                                                  conman 329
backup domain manager 84–85
                                                   conman start command 333
   overview 55
                                                   Connector 23, 50
Bad Symphony 104
                                                   connector instance 92
batch workload 4
                                                   connector instances
batch-job skeletons 166
                                                       overview 94
batchman 57
                                                   Connector reference 343
batchman process 20
                                                   Connectors
BATCHOPT 18, 196
                                                       overview 91
best restart step 46
                                                   Controlled z/OS systems 28
binary security file 329



controller 2, 4–5                                    dataset triggering 36
controlling system 28                                date calculations 32
conversion 301                                       deadline 20
correct processing order 32                          deadline time 19–20
CP backup copy 127                                   decompression 146
CP extend 104, 340                                   default user name 208
CP REPLAN 340                                        delete a dataset 45
CPU class definitions 300                            dependencies 2
CPU type 190                                             file 88
CPUACCESS 190                                        dependencies between jobs 41
CPUAUTOLNK 190                                       dependency
CPUDOMAIN 20, 190                                        job level 89
CPUFULLSTAT 191, 307                                     job stream level 89
CPUHOST 190                                          dependency object 6
CPULIMIT 193                                         dependency resolution 22
CPUNODE 20, 190                                      developing extended agents 276
CPUOS 20, 190                                        direct TCP/IP connection 20
CPUREC 20, 175, 184                                  Distributed topology 68
CPUREC definition 20                                 DM
CPUREC statement 20                                      See domain manager
CPURESDEP 191                                        documentation 117
CPUSERVER 192                                        domain manager 6–7, 21, 53, 337
CPUTCPIP 190, 337                                        overview 55
CPUTYPE 20, 190                                      domain topology 184
CPUTZ 193                                            DOMPARENT parameter 198
CPUTZ keyword 194                                    DOMREC 85, 175, 184
CPUUSER 194                                          download a centralized script 73
Create Instance 254                                  dummy end 89
critical jobs 48                                     dummy jobs 334
cross-system coupling facility 30                    dummy start 89
current plan 4, 12                                   dumpsec 328
customize script 209                                 DVIPA VIPARANGE 136
customized calendars 32                              Dynamic Virtual IP Addressing
customizing                                              See DVIPA
     DVIPA 305                                       Dynix 209
     IBM Tivoli Workload Scheduler for z/OS backup
     engines 304
     Job Scheduler Console 255
                                                     E
                                                     e-commerce 2
     security file 325
                                                     ENABLELISTSECCHK 180
     Tivoli environment 246, 344
                                                     Ended-in-error 73
     Tivoli Workload Scheduler for z/OS 162
                                                     end-to-end database objects
     work directory 170
                                                        overview 69
cutover 279
                                                     end-to-end enabler component 115
                                                     end-to-end environment 20
D                                                    end-to-end event data set 170
daily planning batch jobs 273                        end-to-end fail-over scenarios
data integrity 49                                         303
database changes 76                                  END-TO-END FEATURE 163



end-to-end network 14, 17, 20                       Event Manager 66
end-to-end plans                                    extend long-term plan 38
   overview 75                                      Extend of current plan 40
end-to-end scheduling xi, 3–4, 7–8, 146             extend plan 213
   conversion process 289                           extended agent 7, 190
   creating Windows user and password definitions      overview 55
   272                                              extended agent method 55
   education 298                                    extended plan 86
   file dependencies 331                            extension of the current plan 39
   firewall support 20                              external dependency 35
   guidelines for conversion 299
   implementation scenarios 265
   migrating backwards 288
                                                    F
                                                    fault tolerance 6, 14, 56, 322
   migration actions 278
                                                    fault-tolerant agent 5–7, 20, 22, 58, 64
   migration checklist 277
                                                         backup 319
   migration planning 276
                                                         definitions 69
   our environment 266
                                                         fault-tolerant workstation 22
   password consideration 285
                                                         installing 279
   planning 111
                                                         local copy 22
   previous release of OPC 158
                                                         naming conventions 146
   rationale behind conversion 112
                                                         overview 55
   run number 216, 332
                                                         security 323
   TCP/IP considerations 129
                                                    fault-tolerant architecture 129
   tips and tricks 331
                                                    fault-tolerant job 18
   verify the conversion 299
                                                    fault-tolerant workstation 216
   what it is 2
                                                    file dependencies 88
end-to-end scheduling network 50
                                                    file transfer 33
end-to-end scheduling solution 3
                                                    filewatch options 331
end-to-end script library 128
                                                    filewatch program 88
end-to-end server 10, 61–63, 65
                                                    filewatch.sh 331
end-to-end topology statements 174
                                                    final cutover 287
EQQ prefix 203
                                                    firewall environment 20
EQQA531E 286
                                                    FIREWALL option 20
EQQMLOG 205–206
                                                    firewall support 20
EQQPARM members 317
                                                    firewalls 113
EQQPCS05 job 172
                                                    first-level domain manager 8, 62, 133
EQQPDF 119
                                                    fix pack 140
EQQPDFEM member 123
                                                         FixPack 04 151
EQQPT56W 206
                                                         JSC 150
EQQSCLIB DD statement 9
                                                    forecast future workloads 37
EQQSERP member 247
                                                    FTA
EQQTWSCS dataset 11
                                                         See fault-tolerant agent
EQQTWSIN 65, 127
                                                    FTP 288
EQQTWSOU 127, 168
                                                    FTW jobs 234–235
EQQUX001 11, 285
EQQUX002 282
EQQWMIGZ 282                                        G
establish a reconnection 133                        gcomposer 153
event files 57                                      gconman 153



General Service 66                                      318
generation data group 46                                backup domain manager 147
globalopts 172                                          benefits of integrating with ITWS for z/OS 7
GRANTLOGONASBATCH 180                                   central repositories 322
                                                        creating TMF Administrators 257
                                                        database files 6
H                                                       definition 7, 22
HACMP 275
                                                        dependency resolution 22
Hewlett-Packard 5
                                                        engine 22
HFS
                                                        Extended Agents 7
   See Hierarchical File System
                                                        fault-tolerant workstation 22
High Availability Cluster Multi-Processing
                                                        four tier network 53
   See HACMP
                                                        installing 207
high availability configurations
                                                        installing an agent 207
   Configure backup domain manager 306
                                                        installing and configuring Tivoli Framework 245
   Configure dynamic VIPA 305
                                                        installing Job Scheduling Services 253–254
   DNS 303
                                                        installing multiple instances 207
   hostname file 303
                                                        introduction 5
   IBM Tivoli Workload Scheduler for z/OS backup
                                                        Job Scheduling Console 2
   engine 303
                                                        maintenance 156
   stack affinity 303
                                                        master domain manager 58
   VIPA 303
                                                        MASTERDM 22
   VIPA definitions 306
                                                        monitoring file systems 321
HIGHDATE() 248
                                                        multi-domain configuration 52
highest return code 233
                                                        naming conventions 146
home directory 209
                                                        network 6
host CPU 191
                                                        overview 5
host jobs 18
                                                        plan 5, 58
host name 131
                                                        processes 57
hostname parameter 133
                                                        production day 58
Hot Standby function 30, 130
                                                        scheduling engine 22
housekeeping job stream 38
                                                        script files 322
HP Service Guard 275
                                                        security files 323
HP-UX 209
                                                        single domain configuration 51
HP-UX PA-RISC 254–255, 261
                                                        software ordering 116
                                                        terminology 21
I                                                       unison directory 208
IBM 22–23                                               UNIX code 23
IBM AIX 254–255, 261                                    user 209
IBM mainframe 3                                      IBM Tivoli Workload Scheduler 8.1 10
IBM Tivoli Business Systems Manager 276              IBM Tivoli Workload Scheduler 8.1 suite 2
IBM Tivoli Enterprise Console 321                    IBM Tivoli Workload Scheduler connector 91
IBM Tivoli Management Framework                      IBM Tivoli Workload Scheduler Distributed 10, 14,
   overview 90                                       22
IBM Tivoli Monitoring 321                               overview 5
IBM Tivoli Workload Scheduler 5–6                    IBM Tivoli Workload Scheduler for z/OS 5
   architecture 6                                       architecture 4
   auditing log files 321                               backup engines 303
   backup and maintenance guidelines for FTAs           benefits of integrating with ITWS 7



controller 4, 23                    IBM Tivoli Workload Scheduler for z/OS 8.2
creating TMF administrators 257         centralized control 7
database 4                              Centralized Script Library Management 10
end-to-end dataset allocation 168       enhancements 8
end-to-end datasets and files           improved job log retrieval performance 10
     EQQSCLIB 68                        improved SCRIPTLIB parser 9
     EQQTWSCS 67                        multiple first-level domain managers 8
     EQQTWSIN 67                        recovery actions available 15
     EQQTWSOU 67                        recovery for not centralized jobs 14
     intercom.msg 67                    Return Code mapping 18
     Mailbox.msg 67                     security enhancements 20
     NetReq.msg 67                      variable substitution for not centralized jobs 17
     Sinfonia 67                    IBM Tivoli Workload Scheduler for z/OS connector
     Symphony 67                    92
     tomaster.msg 67                IBM Tivoli Workload Scheduler processes
     Translator.chk 68                  batchman 57
     Translator.wjl 68                  intercommunication 57
end-to-end FEATURE 163                  jobman 57
engine 23                               mailman 57
EQQJOBS installation aid 162            netman 57
EQQPDF 119                              writer 57
fail-over scenarios 303             identifying dependencies 32
HFS Installation Directory 163      idle time 5
HFS Work Directory 163              impersonation support 276
hot standby engines 303             incident dataset 47
installing 159                      input datasets 168
introduction 4                      installing
long term plan 7                        allocate end-to-end datasets 168
overview 4                              connectors 254
PSP Upgrade and subset id 118–119       END-TO-END FEATURE 163
Refresh CP group 164                    FTAs 207
server processes                        IBM Tivoli Workload Scheduler for z/OS 159
     batchman 64                        Installation Directory 163
     input translator 65                Job Scheduling Console 261
     input writer 65                    Job Scheduling Services 253
     job log retriever 65               latest fix pack 140
     mailman 64                         multiple instances of FTAs 207
     netman 63                          OPC tracker agents 117
     output translator 66               Refresh CP group 164
     receiver subtask 66                service updates 117
     script downloader 65               TCP/IP Server 246
     sender subtask 66                  Tivoli Management Framework 253
     starter 65                         Tivoli Management Framework 3.7B 253
     translator 64                      Tivoli Management Framework 4.1 252
     writer 64                          User for OPC address space 164
switch manager 318                      Work Directory 163
tracker 4                           instance 347
user for OPC address space 164      in-stream JCL 45
VIPA 303                            Intercom.msg 64



internal dependency 35                                   overview 91
INTRACTV 15                                              See JSS
IRIX 209                                             job statistics 5
ISPF 11                                              job stream 32–33, 36, 69, 81, 331
ISPF panels 5                                        job stream run cycle 40
Itanium 209                                          job submission 4
ITWS                                                 job tailoring 42
    See IBM Tivoli Workload Scheduler                job tracking 41
ITWS for z/OS                                        job_instance_output 98
    See IBM Tivoli Workload Scheduler for z/OS       JOBCMD 11, 15
                                                     job-completion checking 47
                                                     jobdef.txt 300
J                                                    job-level restart 46
Java GUI interface 5
                                                     JOBLIB 11
Java Runtime Environment Version 1.3 262
                                                     jobman 57
JCL 282
                                                     JOBREC parameters 16
JCL Editing 11
                                                     JOBSCR 15
JCL variables 37, 221
                                                     jobstream.txt 300
JES2 30
                                                     JOBUSR 15
JES3 30
                                                     JOBWS 15
job 2, 89, 94
                                                     JSC
job control process 57
                                                         See Job Scheduling Console
Job Instance Recovery Information panel 16
                                                     JSC migration considerations 151
job log retriever 10
                                                     JSC server 89
Job Migration Tool 282
                                                     JSC server initialization 247
job return code 19
                                                     JSCHOSTNAME() 248
job scheduling 2
                                                     JSS 23
Job Scheduling Console 2, 5, 11
                                                     JTOPTS statement 169
    add users’ logins 257
                                                     JTOPTS TWSJOBNAME 200
    availability 153
                                                     Julian months 37
    compatibility 151
    creating connector instances 255
    creating TMF administrators 257                  L
    documentation 150                                Language Environment 135
    fix pack 150                                     late job handling 19
    hardware and software prerequisites 262          legacy GUI 153
    installation on AIX 263                          legacy ISPF panels 80
    installation on Sun Solaris 263                  legacy system 2
    installation on Windows 263                      Linux Red Hat 261
    installing 261                                   Linux Red Hat 7.1 254–255, 261
    installing Job Scheduling Services 253           localopts 146, 336–337
    Job Scheduling Console, Version 1.3 261          localopts file 21
    login window 264                                 LOGLINES 181
    migration considerations 151                     long-term plan 4, 37–38
    overview 89                                      long-term plan simulation reports 38
    required TMR roles 260                           long-term switch 308
Job Scheduling Console commands 28                   loop 15
Job Scheduling Console, Version 1.3 261              loss of communication 6
Job Scheduling Services                              LTP Modify batch job 76



M
Maestro 22
maestro_database 98
maestro_engine 97
maestro_plan 97
maestro_x_server 98
Mailbox.msg 64
mailman 57
mailman cache 144–145
Maintenance release 207
maintenance strategy 156
maintenance windows 34
makesec command 329–330
managed node 91
management hub 51, 55
manual control 42
manual editing of jobs 42
manual task 33
master 6, 22
master domain manager 6, 22, 56, 60, 131, 143
    overview 54
MASTERDM 22
MCP dialog 17
MDM Symphony file 53
message management process 57
migrating backwards 288
migration
    actions 278
    benefits 274
    planning 276
    planning for 155
migration checklist 277
migration tool 285
missed deadline 19, 35
mm cache mailbox 145
mm cache size 145
Modify all 40
mozart directory 172
MSCS 318
multiple calendar function 36
multiple domains 56
multiple tracker agents 280

N
national holidays 36
NCP VSAM data set 170
nested INCLUDEs 46
Netconf 172
netman 57
Netman configuration file 172
netman process 63
NetReq.msg 63
NetView for OS/390 44
network listener program 57
network traffic 52
network traffic generated 56
network writer process 57
new functions related to performance and scalability
    overview 8
new input writer thread 10
new job definition syntax 14
new operation 12
new parser 10
new plan 58
newly allocated dataset 169
next production day 33
nm ipvalidate=full 338
nm ipvalidate=none 338
NM PORT 337
NOERROR functionality 18
non-centralized script 72
    overview 72
non-FTW operations 12
non-mainframe schedulers 7
NOPTIMEDEPENDENCY 181
Normal Mode Manager 66
notify changes 60
NT user definitions 328

O
Object attributes 325
OCCNAME 200
offline workstations 332
offset 37
Old Symphony 104
old tracker agent jobs 11
OPC 4–5
OPC connector 92
OPC tracker agent 11, 117
opc_connector 96
opc_connector2 96
OPCMASTER 8, 21, 207
OPCOPTS 304
OpenSSL toolkit 21
Operations Planning and Control
    See OPC
operations return code 18
operator instruction 37
operator intervention 5
Oracle 2, 7
Oracle Applications 55
organizational unit 56
OS/390 4, 7
OS/400 22
oserv program 97
OSUF 73
out of sync 255
output datasets 168
overhead 146
own copy of the plan 6

P
Parallel Sysplex 29
parallel testing 286
parent domain manager 22
parms command 302
partitioned dataset 128
PDSE dataset 11
PeopleSoft 55
performance bottleneck 8
performance improvements over HFS 126
performance-related parameters
    mm cache size 145
    sync level 145
    wr enable compression 146
periods (business processing cycles) 36
pervasive APARs 118
PIF
    See Program Interface
PIF applications 31
plan 37
plan auditing 321
plan extension 85
plan file 22
plan process 37
PLANAUDITLEVEL 181, 321
pobox directory 172
port 31111 63
port number 182
PORTNUMBER 182
predecessor 4, 331
predefined JCL variables 37
predefined periods 36
preventive service planning
    See PSP
primary domain manager 8
printing of output 34
priority 0 335
private key 21
processing day 5
production control process 57
production day 6, 58
production schedule 4
Program Directory 118
Program Interface 5, 47
PSP 118
PTF U482278 253

R
R/W mode 127
r3batch 98
RACF user 257
RACF user ID 250
range of IP address 136
RCCONDSUC 15, 19
recovery actions 15
recovery information 17
recovery job 15
recovery of jobs 45
recovery option 240
recovery process 5
recovery prompt 15
recovery prompts 88
RECOVERY statement 15, 17
Red Hat 7.2 275
Red Hat 7.3 275
Red Hat Linux for S/390 254
Redbooks Web site 350
    Contact us xiii
Refresh CP group field 165
remote panels 28
remote systems 30–31
remote z/OS system 30
Removable Media Manager 46
repeat range 89
replan 213
reporting errors 47
re-queuing SYSOUT 47
rerun a job 45
rerun from 335
rerun jobs 89
RESERVE macro 169
reserved resource class 49
resource serialization support 36
resources 36
resources availability 39
restart and cleanup 45
return code 19
Return Code mapping 18
return code mapping
    overview 18
rmstdlist 320
roll over 6
rolling plan 39
routing events 64
rule 37
run cycle 32, 36

S
SAF interface 49
SAGENT 190
sample security file 327
SAP R/3 55
SAP R/3 extended agent 98
SAP/3 2
scalability 8
schedule 81
scheduling 2
scheduling engine 22
script library 83
SCRIPTLIB 11
SCRIPTLIB library 10
SCRIPTLIB parser 9
SCRPTLIB dataset 9
SCRPTLIB member 17
SD37 abend code 169
security 48
security enhancements
    overview 20
security file stanzas 325
SEQQMISC library 123
serialize work 36
server 10, 23
server started task 174
service level agreement 38
service updates 117
Set Logins 258
Set TMR Roles 260
shared DASD 30
shell script 321
short-term switch 308
Sinfonia 79, 146
Sinfonia file 146
single point of control 7
size of working directory 183
slow WAN 146
SMF 30
software ordering details 115
special resource dependencies 71
special resources 36
special running instructions 37
SSL protocol 21
SSLLEVEL 182, 194
SSLPORT 182
standard agent 55
    overview 55
standby controller engine 84
standby engine 29, 84, 86
start and stop commands 89
start of each day 6
started tasks 61
starter process 273
StartUp command 57
start-up messages 206
status information 30
status inquiries 42
status of all jobs 6
status of the job 59
status reporting 47
stdlist 337
stdlist files 126, 164
step-level restart 46
steps of a security check 326
submit jobs 4
submitting userid 139
subordinate agents 22
subordinate domain manager 58
subordinate FTAs 6
subordinate workstations 60
substitute JCL variables 231
success condition 18
successful condition 19
SuSE Linux Enterprise Server 254
SuSE Linux Enterprise Server for S/390 254
switch manager 318
Switching domain manager 85, 313
    backup manager 308
    long-term switch 308
    short-term switch 308
    using switchmgr 313
    using the Job Scheduling Console 310
    verifying 314
switching domain manager
    using WSSTAT 313
switchmgr 313
Symbad 104
Symnew 341
Symold 104
Symphony file 10, 17, 20, 52, 57–58, 82–83, 182, 216, 321
    creation 58
    distribution 59
    monitoring 59
    renew 273
    run number 68
    sending to subordinate workstations 64
    switching 67
    troubleshooting 340
    update 59
Symphony file creation time 10
Symphony file generation
    overview 80
Symphony renew 104
Symphony run number 68, 333
SymUSER 79
SymX 104
sync level 145
synchronization 86
sysplex 7, 164, 303
System Authorization Facility 49
System Automation/390 317
System Display and Search Facility 46
system documentation 117
System SSL services 21

T
TABLES keyword 18
TCP/IP considerations 129
    Dynamic Virtual IP Addressing 135
    stack affinity 134
    use of the host file 133
TCP/IP link 28
TCPIPJOBNAME 183
temporarily store a script 11
terminology 21
tier-1 platforms 210
tier-2 platforms 209
time depending jobs 39
time of day 32
time zone 194
tips and tricks 331
    backup and maintenance guidelines on FTAs 318
    central repositories 322
    common errors for jobs 334
    dummy jobs 334
    file dependencies 331
    filewatch.sh program 331
    job scripts in the same directories 334
    monitoring example 321
    monitoring file systems 321
    plan auditing 321
    script files 322
    security files 323
    stdlist files 319
    unlinked workstations 332
    useful Tivoli Framework commands 348
Tivoli administrator ID 250
Tivoli Framework commands 348
Tivoli managed node 28
Tivoli Managed Region
    See TMR
Tivoli Management Environment 90
Tivoli Management Framework 23, 257
Tivoli Management Framework 3.7.1 253
Tivoli Management Framework 3.7B 253
Tivoli object repository 91
Tivoli server 28
Tivoli Workload Scheduler for z/OS
    overview 4
Tivoli-managed node 264
TMR database 91
TMR server 91
top-level domain 22
TOPLOGY 76
topology 56, 68
topology definitions 76
topology parameter statements 69
TOPOLOGY PORTNUMBER 337
TOPOLOGY statement 21, 178
TPLGYMEM 183
TPLGYPRM 177
tracker 4
tracker agent enabler component 115
tracker agent jobs 287
tracker agents 118
training 287
translator checkpoint file 104
translator log 105
Translator.chk 104
Translator.wjl 104
TRCDAYS 183
trial plans 37
trigger 331
troubleshooting
    common errors for jobs on fault-tolerant workstations 334
    DIAGNOSE statements 341
    E2E PORTNUMBER and CPUTCPIP 336
    EQQPT52E 340
    handling errors in script definitions 334
    handling offline or unlinked workstations 332
    handling wrong password definition for Windows FTW 336
    message EQQTT11E 339
    problems with port numbers 336
    SERVOPTS PORTNUMBER 339
    TOPOLOGY PORTNUMBER 338
Tru64 UNIX 209
trusted certification authority 21
TSO command 5
TSO parser 10
TSO rules 11
TWSJOBNAME parameter 169

U
uncatalog a dataset 45
Unison Maestro 5
Unison Software 3, 5
UNIX 22
UNIX System Services
    See USS
unlinked workstations 332
unplanned work 47
upgrade
    planning for 155
user attributes 325
users.txt 300
using dummy jobs 334
USRCPU 196
USRMEM 183
USRNAM 196
USRPWD 196
USRREC 175, 184
USRREC definition 336
USS 20, 23, 124
USS workdir 21
UTF-8 to EBCDIC translation 65

V
variable substitution and Job Setup 11
Variable Substitution Directives 17
VARSUB statement 17
verification test 159
Version 8.2 enhancements
    overview 10
view long-term plan 38
VIPA 303
VIPARANGE 136
virtual IP address 135
VSAM dataset 68, 127
VTAM Model Application Program Definition feature 46

W
Web browser 2
weekly period 36
Windows 22
Windows 2000 Terminal Services 261
Windows clustering 275
wlookup 348
wlookup -ar PatchInfo 348
wlookup -ar ProductInfo 348
wmaeutil 348
wmaeutil command 329
wopcconn command 345
work directory 164
workfiles 125
workload 2
workload forecast 7
Workload Manager 48
Workload Manager interface 48
workload priorities 32, 41
workstation 34
wr enable compression 146
wrap-around dataset 169
writer 57
writing incident records 47
WRKDIR 183
WSCCLOG.properties 172
WSSTAT command 306
wtwsconn.sh command 256, 347
wuninst 348


X
XAGENT 190
X-agent method 98


Y
YYYYMMDD_E2EMERGE.log 105


Z
z/OS environment 71
z/OS extended agent 7
z/OS host name 132
z/OS job 2
z/OS security 250
z/OS/ESA Version 4 Release 1 304
zFS clusters 126




Back cover

End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2

Plan and implement your end-to-end scheduling environment

Experiment with real-life scenarios

Learn best practices and troubleshooting

The beginning of the new century sees the data center with a mix of work, hardware, and operating systems previously undreamed of. Today’s challenge is to manage disparate systems with minimal effort and maximum reliability. People experienced in scheduling traditional host-based batch work must now manage distributed systems, and those working in the distributed environment must take responsibility for work running on the corporate OS/390 system.

This IBM Redbook considers how best to provide end-to-end scheduling using IBM Tivoli Workload Scheduler Version 8.2, both distributed (previously known as Maestro) and mainframe (previously known as OPC) components.

In this book, we provide the information for installing the necessary Tivoli Workload Scheduler 8.2 software components and configuring them to communicate with each other. In addition to technical information, we consider various scenarios that may be encountered in the enterprise and suggest practical solutions. We describe how to manage work and dependencies across both environments using a single point of control.

We believe that this book will be a valuable reference for IT specialists who implement end-to-end scheduling with Tivoli Workload Scheduler 8.2.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks

SG24-6624-00        ISBN 073849139X

End to-end scheduling with ibm tivoli workload scheduler version 8.2 sg246624

  • 1. Front cover End-to-End Scheduling with IBM Tivoli Workload kload Scheduler V 8.2 Plan and implement your end-to-end scheduling environment Experiment with real-life scenarios Learn best practices and troubleshooting Vasfi Gucer Michael A. Lowry Finn Bastrup Knudsen ibm.com/redbooks
  • 3. International Technical Support Organization End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2 September 2004 SG24-6624-00
  • 4. Note: Before using this information and the product it supports, read the information in “Notices” on page ix. First Edition (September 2004) This edition applies to IBM Tivoli Workload Scheduler Version 8.2, IBM Tivoli Workload Scheduler for z/OS Version 8.2. © Copyright International Business Machines Corporation 2004. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
  • 5. Contents Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi Notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.1 Job scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.2 Introduction to end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.3 Introduction to Tivoli Workload Scheduler for z/OS. . . . . . . . . . . . . . . . . . . 4 1.3.1 Overview of Tivoli Workload Scheduler for z/OS . . . . . . . . . . . . . . . . 4 1.3.2 Tivoli Workload Scheduler for z/OS architecture . . . . . . . . . . . . . . . . 4 1.4 Introduction to Tivoli Workload Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.4.1 Overview of IBM Tivoli Workload Scheduler . . . . . . . . . . . . . . . . . . . . 5 1.4.2 IBM Tivoli Workload Scheduler architecture . . . . . . . . . . . . . . . . . . . . 6 1.5 Benefits of integrating Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.6 Summary of enhancements in V8.2 related to end-to-end scheduling . . . . 8 1.6.1 New functions related with performance and scalability . . . . . . . . . . . 8 1.6.2 General enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.6.3 Security enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 1.7 The terminology used in this book. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Chapter 2. End-to-end scheduling architecture . . . . . . . . . . . . . . . . . . . . . 25 2.1 IBM Tivoli Workload Scheduler for z/OS architecture . . . . . . . . . . . . . . . . 27 2.1.1 Tivoli Workload Scheduler for z/OS configuration. . . . . . . . . . . . . . . 28 2.1.2 Tivoli Workload Scheduler for z/OS database objects . . . . . . . . . . . 32 2.1.3 Tivoli Workload Scheduler for z/OS plans. . . . . . . . . . . . . . . . . . . . . 37 2.1.4 Other Tivoli Workload Scheduler for z/OS features . . . . . . . . . . . . . 44 2.2 Tivoli Workload Scheduler architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 2.2.1 The IBM Tivoli Workload Scheduler network . . . . . . . . . . . . . . . . . . 51 2.2.2 Tivoli Workload Scheduler workstation types . . . . . . . . . . . . . . . . . . 54 2.2.3 Tivoli Workload Scheduler topology . . . . . . . . . . . . . . . . . . . . . . . . . 56 2.2.4 IBM Tivoli Workload Scheduler components . . . . . . . . . . . . . . . . . . 57 2.2.5 IBM Tivoli Workload Scheduler plan . . . . . . . . . . . . . . . . . . . . . . . . . 58 2.3 End-to-end scheduling architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 © Copyright IBM Corp. 2004. 
All rights reserved. iii
  • 6. 2.3.1 How end-to-end scheduling works . . . . . . . . . . . . . . . . . . . . . . . . . . 60 2.3.2 Tivoli Workload Scheduler for z/OS end-to-end components . . . . . . 62 2.3.3 Tivoli Workload Scheduler for z/OS end-to-end configuration . . . . . 68 2.3.4 Tivoli Workload Scheduler for z/OS end-to-end plans . . . . . . . . . . . 75 2.3.5 Making the end-to-end scheduling system fault tolerant. . . . . . . . . . 84 2.3.6 Benefits of end-to-end scheduling. . . . . . . . . . . . . . . . . . . . . . . . . . . 86 2.4 Job Scheduling Console and related components . . . . . . . . . . . . . . . . . . 89 2.4.1 A brief introduction to the Tivoli Management Framework . . . . . . . . 90 2.4.2 Job Scheduling Services (JSS). . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 2.4.3 Connectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 2.5 Job log retrieval in an end-to-end environment . . . . . . . . . . . . . . . . . . . . . 98 2.5.1 Job log retrieval via the Tivoli Workload Scheduler connector . . . . . 98 2.5.2 Job log retrieval via the OPC connector . . . . . . . . . . . . . . . . . . . . . . 99 2.5.3 Job log retrieval when firewalls are involved. . . . . . . . . . . . . . . . . . 101 2.6 Tivoli Workload Scheduler, important files, and directory structure . . . . 103 2.7 conman commands in the end-to-end environment . . . . . . . . . . . . . . . . 106 Chapter 3. Planning end-to-end scheduling with Tivoli Workload Scheduler 8.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 3.1 Different ways to do end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . 111 3.2 The rationale behind end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . 112 3.3 Before you start the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113 3.3.1 How to order the Tivoli Workload Scheduler software . . . . . . . . . . 114 3.3.2 Where to find more information for planning . . . . . . . . . . . . . . . . . . 116 3.4 Planning end-to-end scheduling with Tivoli Workload Scheduler for z/OS116 3.4.1 Tivoli Workload Scheduler for z/OS documentation . . . . . . . . . . . . 117 3.4.2 Service updates (PSP bucket, APARs, and PTFs) . . . . . . . . . . . . . 117 3.4.3 Tivoli Workload Scheduler for z/OS started tasks for end-to-end scheduling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 3.4.4 Hierarchical File System (HFS) cluster . . . . . . . . . . . . . . . . . . . . . . 124 3.4.5 Data sets related to end-to-end scheduling . . . . . . . . . . . . . . . . . . 127 3.4.6 TCP/IP considerations for end-to-end server in sysplex . . . . . . . . . 129 3.4.7 Upgrading from Tivoli Workload Scheduler for z/OS 8.1 end-to-end scheduling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 3.5 Planning for end-to-end scheduling with Tivoli Workload Scheduler . . . 139 3.5.1 Tivoli Workload Scheduler publications and documentation. . . . . . 139 3.5.2 Tivoli Workload Scheduler service updates (fix packs) . . . . . . . . . . 140 3.5.3 System and software requirements. . . . . . . . . . . . . . . . . . . . . . . . . 140 3.5.4 Network planning and considerations . . . . . . . . . . . . . . . . . . . . . . . 141 3.5.5 Backup domain manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 3.5.6 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
144 3.5.7 Fault-tolerant agent (FTA) naming conventions . . . . . . . . . . . . . . . 146 3.6 Planning for the Job Scheduling Console . . . . . . . . . . . . . . . . . . . . . . . . 149 iv End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 7. 3.6.1 Job Scheduling Console documentation. . . . . . . . . . . . . . . . . . . . . 150 3.6.2 Job Scheduling Console service (fix packs) . . . . . . . . . . . . . . . . . . 150 3.6.3 Compatibility and migration considerations for the JSC . . . . . . . . . 151 3.6.4 Planning for Job Scheduling Console availability . . . . . . . . . . . . . . 153 3.6.5 Planning for server started task for JSC communication . . . . . . . . 154 3.7 Planning for migration or upgrade from previous versions . . . . . . . . . . . 155 3.8 Planning for maintenance or upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . 156 Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157 4.1 Before the installation is started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158 4.2 Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling . . 159 4.2.1 Executing EQQJOBS installation aid . . . . . . . . . . . . . . . . . . . . . . . 162 4.2.2 Defining Tivoli Workload Scheduler for z/OS subsystems . . . . . . . 167 4.2.3 Allocate end-to-end data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 4.2.4 Create and customize the work directory . . . . . . . . . . . . . . . . . . . . 170 4.2.5 Create started task procedures for Tivoli Workload Scheduler for z/OS 173 4.2.6 Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174 4.2.7 Initialization statements used to describe the topology. . . . . . . . . . 184 4.2.8 Example of DOMREC and CPUREC definitions. . . . . . . . . . . . . . . 197 4.2.9 The JTOPTS TWSJOBNAME() parameter . . . . . . . . . . . . . . . . . . . 200 4.2.10 Verify end-to-end installation in Tivoli Workload Scheduler for z/OS . 203 4.3 Installing Tivoli Workload Scheduler in an end-to-end environment . . . . 207 4.3.1 Installing multiple instances of Tivoli Workload Scheduler on one machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 4.3.2 Verify the Tivoli Workload Scheduler installation . . . . . . . . . . . . . . 211 4.4 Define, activate, verify fault-tolerant workstations . . . . . . . . . . . . . . . . . . 211 4.4.1 Define fault-tolerant workstation in Tivoli Workload Scheduler controller workstation database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212 4.4.2 Activate the fault-tolerant workstation definition . . . . . . . . . . . . . . . 213 4.4.3 Verify that the fault-tolerant workstations are active and linked . . . 214 4.5 Creating fault-tolerant workstation job definitions and job streams . . . . . 217 4.5.1 Centralized and non-centralized scripts . . . . . . . . . . . . . . . . . . . . . 217 4.5.2 Definition of centralized scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219 4.5.3 Definition of non-centralized scripts . . . . . . . . . . . . . . . . . . . . . . . . 221 4.5.4 Combination of centralized script and VARSUB, JOBREC parameters 232 4.5.5 Definition of FTW jobs and job streams in the controller. . . . . . . . . 234 4.6 Verification test of end-to-end scheduling . . . . . . . . . . . . . . . . . . . . . . . . 235 4.6.1 Verification of job with centralized script definitions . . . . . . . . . . . . 236 Contents v
  • 8. 4.6.2 Verification of job with non-centralized scripts . . . . . . . . . . . . . . . . 239 4.6.3 Verification of centralized script with JOBREC parameters . . . . . . 242 4.7 Activate support for the Tivoli Workload Scheduler Job Scheduling Console 245 4.7.1 Install and start Tivoli Workload Scheduler for z/OS JSC server . . 246 4.7.2 Installing and configuring Tivoli Management Framework 4.1 . . . . 252 4.7.3 Alternate method using Tivoli Management Framework 3.7.1 . . . . 253 4.7.4 Creating connector instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255 4.7.5 Creating WTMF administrators for Tivoli Workload Scheduler . . . . 257 4.7.6 Installing the Job Scheduling Console . . . . . . . . . . . . . . . . . . . . . . 261 Chapter 5. End-to-end implementation scenarios and examples. . . . . . 265 5.1 Description of our environment and systems . . . . . . . . . . . . . . . . . . . . . 266 5.2 Creation of the Symphony file in detail . . . . . . . . . . . . . . . . . . . . . . . . . . 273 5.3 Migrating Tivoli OPC tracker agents to end-to-end scheduling . . . . . . . . 274 5.3.1 Migration benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274 5.3.2 Migration planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276 5.3.3 Migration checklist. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277 5.3.4 Migration actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278 5.3.5 Migrating backward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288 5.4 Conversion from Tivoli Workload Scheduler network to Tivoli Workload Scheduler for z/OS managed network . . . . . . . . . . . . . . . . . . . . . . . . . . 288 5.4.1 Illustration of the conversion process . . . . . . . . . . . . . . . . . . . . . . . 289 5.4.2 Considerations before doing the conversion. . . . . . . . . . . . . . . . . . 291 5.4.3 Conversion process from Tivoli Workload Scheduler to Tivoli Workload Scheduler for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293 5.4.4 Some guidelines to automate the conversion process . . . . . . . . . . 299 5.5 Tivoli Workload Scheduler for z/OS end-to-end fail-over scenarios . . . . 303 5.5.1 Configure Tivoli Workload Scheduler for z/OS backup engines . . . 304 5.5.2 Configure DVIPA for Tivoli Workload Scheduler for z/OS end-to-end server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305 5.5.3 Configure backup domain manager for first-level domain manager 306 5.5.4 Switch to Tivoli Workload Scheduler backup domain manager . . . 308 5.5.5 Implementing Tivoli Workload Scheduler high availability on high availability environments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318 5.6 Backup and maintenance guidelines for FTAs . . . . . . . . . . . . . . . . . . . . 318 5.6.1 Backup of the Tivoli Workload Scheduler FTAs . . . . . . . . . . . . . . . 319 5.6.2 Stdlist files on Tivoli Workload Scheduler FTAs . . . . . . . . . . . . . . . 319 5.6.3 Auditing log files on Tivoli Workload Scheduler FTAs. . . . . . . . . . . 321 5.6.4 Monitoring file systems on Tivoli Workload Scheduler FTAs . . . . . 321 5.6.5 Central repositories for important Tivoli Workload Scheduler files . 322 5.7 Security on fault-tolerant agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323 5.7.1 The security file . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . 325 vi End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 9. 5.7.2 Sample security file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327 5.8 End-to-end scheduling tips and tricks . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 5.8.1 File dependencies in the end-to-end environment . . . . . . . . . . . . . 331 5.8.2 Handling offline or unlinked workstations . . . . . . . . . . . . . . . . . . . . 332 5.8.3 Using dummy jobs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334 5.8.4 Placing job scripts in the same directories on FTAs . . . . . . . . . . . . 334 5.8.5 Common errors for jobs on fault-tolerant workstations . . . . . . . . . . 334 5.8.6 Problems with port numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336 5.8.7 Cannot switch to new Symphony file (EQQPT52E) messages. . . . 340 Appendix A. Connector reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343 Setting the Tivoli environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344 Authorization roles required . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344 Working with Tivoli Workload Scheduler for z/OS connector instances . . . . . 344 The wopcconn command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 Working with Tivoli Workload Scheduler connector instances . . . . . . . . . . . . 346 The wtwsconn.sh command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 Useful Tivoli Framework commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348 Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349 IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349 Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349 Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350 How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350 Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350 Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353 Contents vii
  • 10. viii End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 11. Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces. © Copyright IBM Corp. 2004. All rights reserved. ix
  • 12. Trademarks The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX® NetView® ServicePac® AS/400® OS/390® Tivoli® HACMP™ OS/400® Tivoli Enterprise Console® IBM® RACF® TME® Language Environment® Redbooks™ VTAM® Maestro™ Redbooks (logo) ™ z/OS® MVS™ S/390® zSeries® The following terms are trademarks of other companies: Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel is a trademark of Intel Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, and service names may be trademarks or service marks of others. x End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 13. Preface The beginning of the new century sees the data center with a mix of work, hardware, and operating systems previously undreamed of. Today’s challenge is to manage disparate systems with minimal effort and maximum reliability. People experienced in scheduling traditional host-based batch work must now manage distributed systems, and those working in the distributed environment must take responsibility for work running on the corporate OS/390® system. This IBM® Redbook considers how best to provide end-to-end scheduling using IBM Tivoli® Workload Scheduler Version 8.2, both distributed (previously known as Maestro™) and mainframe (previously known as OPC) components. In this book, we provide the information for installing the necessary Tivoli Workload Scheduler software components and configuring them to communicate with each other. In addition to technical information, we consider various scenarios that may be encountered in the enterprise and suggest practical solutions. We describe how to manage work and dependencies across both environments using a single point of control. We believe that this redbook will be a valuable reference for IT specialists who implement end-to-end scheduling with Tivoli Workload Scheduler 8.2. The team that wrote this redbook This redbook was produced by a team of specialists from around the world working at the International Technical Support Organization, Austin Center. Vasfi Gucer is a Project Leader at the International Technical Support Organization, Austin Center. He worked for IBM Turkey for 10 years and has been with the ITSO since January 1999. He has more than 10 years of experience in the areas of systems management, and networking hardware and software on mainframe and distributed platforms. He has worked on various Tivoli customer projects as a Systems Architect in Turkey and the United States. Vasfi is also a IBM Certified Senior IT Specialist. Michael A. Lowry is an IBM Certified Consultant and Instructor currently working for IBM in Stockholm, Sweden. Michael does support, consulting, and training for IBM customers, primarily in Europe. He has 10 years of experience in the IT services business and has worked for IBM since 1996. Michael studied engineering and biology at the University of Texas in Austin, his hometown. © Copyright IBM Corp. 2004. All rights reserved. xi
  • 14. Before moving to Sweden, he worked in Austin for Apple, IBM, and the IBM Tivoli Workload Scheduler Support Team at Tivoli Systems. He has five years of experience with Tivoli Workload Scheduler and has extensive experience with IBM network and storage management products. He is also an IBM Certified AIX® Support Professional. Finn Bastrup Knudsen is an Advisory IT Specialist in Integrated Technology Services (ITS) in IBM Global Services in Copenhagen, Denmark. He has 12 years of experience working with IBM Tivoli Workload Scheduler for z/OS® (OPC) and four years of experience working with IBM Tivoli Workload Scheduler. Finn primarily does consultation and services at customer sites, as well as IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler training. He is a certified Tivoli Instructor in IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler. He has worked at IBM for 13 years. His areas of expertise include IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler. Also thanks to the following people for their contributions to this project: International Technical Support Organization, Austin Center Budi Darmawan and Betsy Thaggard IBM Italy Angelo D'ambrosio, Paolo Falsi, Antonio Gallotti, Pietro Iannucci, Valeria Perticara IBM USA Robert Haimowitz, Stephen Viola IBM Germany Stefan Franke Notice This publication is intended to help Tivoli specialists implement an end-to-end scheduling environment with IBM Tivoli Workload Scheduler 8.2. The information in this publication is not intended as the specification of any programming interfaces that are provided by Tivoli Workload Scheduler 8.2. See the PUBLICATIONS section of the IBM Programming Announcement for Tivoli Workload Scheduler 8.2 for more information about what publications are considered to be product documentation. xii End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 15. Become a published author Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You will team with IBM technical professionals, Business Partners, and/or customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you will develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html Comments welcome Your comments are important to us. We want our Redbooks™ to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at: ibm.com/redbooks Send your comments in an e-mail to: redbook@us.ibm.com Mail your comments to: IBM Corporation, International Technical Support Organization Dept. JN9B Building 905 Internal Zip 2834 11501 Burnet Road Austin, Texas 78758-3493 Preface xiii
  • 16. xiv End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
1
Chapter 1. Introduction
IBM Tivoli Workload Scheduler for z/OS Version 8.2 introduces many new features and further integrates the OPC-based and Maestro-based scheduling engines. In this chapter, we give a brief introduction to the IBM Tivoli Workload Scheduler 8.2 suite and summarize the functions that are introduced in Version 8.2:
“Job scheduling” on page 2
“Introduction to end-to-end scheduling” on page 3
“Introduction to Tivoli Workload Scheduler for z/OS” on page 4
“Introduction to Tivoli Workload Scheduler” on page 5
“Benefits of integrating Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler” on page 7
“Summary of enhancements in V8.2 related to end-to-end scheduling” on page 8
“The terminology used in this book” on page 21
© Copyright IBM Corp. 2004 1
  • 18. 1.1 Job scheduling Scheduling is the nucleus of the data center. Orderly, reliable sequencing and management of process execution is an essential part of IT management. The IT environment consists of multiple strategic applications, such as SAP/3 and Oracle, payroll, invoicing, e-commerce, and order handling. These applications run on many different operating systems and platforms. Legacy systems must be maintained and integrated with newer systems. Workloads are increasing, accelerated by electronic commerce. Staffing and training requirements increase, and many platform experts are needed. There are too many consoles and no overall point of control. Constant (24x7) availability is essential and must be maintained through migrations, mergers, acquisitions, and consolidations. Dependencies exist between jobs in different environments. For example, a customer can use a Web browser to fill out an order form that triggers a UNIX® job that acknowledges the order, an AS/400® job that orders parts, a z/OS job that debits the customer’s bank account, and a Windows NT® job that prints an invoice and address label. Each job must run only after the job before it has completed. The IBM Tivoli Workload Scheduler Version 8.2 suite provides an integrated solution for running this kind of complicated workload. Its Job Scheduling Console provides a centralized point of control and unified interface for managing the workload regardless of the platform or operating system on which the jobs run. The Tivoli Workload Scheduler 8.2 suite includes IBM Tivoli Workload Scheduler, IBM Tivoli Workload Scheduler for z/OS, and the Job Scheduling Console. Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS can be used separately or together. End-to-end scheduling means using both products together, with an IBM mainframe acting as the scheduling controller for a network of other workstations. Because Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS have different histories and work on different platforms, someone who is familiar with one of the programs may not be familiar with the other. For this reason, we give a short introduction to each product separately and then proceed to discuss how the two programs work together. 2 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
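To make the order-handling example above more concrete, the following is a minimal sketch of how such a cross-platform chain of dependencies could be expressed in the Tivoli Workload Scheduler scheduling language (composer syntax). All workstation and job names here are illustrative, and in an end-to-end configuration the equivalent definitions would normally be created as an application in Tivoli Workload Scheduler for z/OS rather than with composer:

SCHEDULE MASTERDM#WEBORDERS
ON REQUEST
:
UNIXFTA#ACK_ORDER
AS4FTA#ORDER_PARTS   FOLLOWS UNIXFTA#ACK_ORDER
ZOSWS#DEBIT_ACCOUNT  FOLLOWS AS4FTA#ORDER_PARTS
WINFTA#PRINT_INVOICE FOLLOWS ZOSWS#DEBIT_ACCOUNT
END

Each FOLLOWS clause ensures that a job runs only after the job before it has completed, regardless of the platform on which either job executes.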
1.2 Introduction to end-to-end scheduling
End-to-end scheduling means scheduling workload across all computing resources in your enterprise, from the mainframe in your data center, to the servers in your regional headquarters, all the way to the workstations in your local office. The Tivoli Workload Scheduler end-to-end scheduling solution is a system whereby scheduling throughout the network is defined, managed, controlled, and tracked from a single IBM mainframe or sysplex. End-to-end scheduling requires using two different programs: Tivoli Workload Scheduler for z/OS on the mainframe, and Tivoli Workload Scheduler on other operating systems (UNIX, Windows®, and OS/400®). This is shown in Figure 1-1.
Figure 1-1 Both schedulers are required for end-to-end scheduling (the figure shows the OPCMASTER master domain on z/OS with AIX and HP-UX domain managers DMA and DMB, and fault-tolerant agents FTA1-FTA4 on Linux, OS/400, Windows XP, and Solaris)
Despite the similar names, Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler are quite different and have distinct histories. IBM Tivoli Workload Scheduler for z/OS was originally called OPC. It was developed by IBM in the early days of the mainframe. IBM Tivoli Workload Scheduler was originally developed by a company called Unison Software. Unison was purchased by Tivoli, and Tivoli was then purchased by IBM. Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler have slightly different ways of working, but the two programs have many features in common. IBM has continued development of both programs toward the goal of providing closer
Chapter 1. Introduction 3
and closer integration between them. The reason for this integration is simple: to facilitate an integrated scheduling system across all operating systems. It should be obvious that end-to-end scheduling depends on using the mainframe as the central point of control for the scheduling network. There are other ways to integrate scheduling between z/OS and other operating systems. We will discuss these in the following sections.
1.3 Introduction to Tivoli Workload Scheduler for z/OS
IBM Tivoli Workload Scheduler for z/OS has been scheduling and controlling batch workloads in data centers since 1977. Originally called Operations Planning and Control (OPC), the product has been extensively developed and extended to meet the increasing demands of customers worldwide. An overnight workload consisting of 100,000 production jobs is not unusual, and Tivoli Workload Scheduler for z/OS can easily manage this kind of workload.
1.3.1 Overview of Tivoli Workload Scheduler for z/OS
IBM Tivoli Workload Scheduler for z/OS databases contain all of the information about the work that is to be run, when it should run, and the resources that are needed and available. This information is used to calculate a forecast called the long-term plan. Data center staff can check this to confirm that the desired work is being scheduled when required. The long-term plan usually covers a time range of four to twelve weeks. The current plan is produced based on the long-term plan and the databases. The current plan usually covers 24 hours and is a detailed production schedule. Tivoli Workload Scheduler for z/OS uses the current plan to submit jobs to the appropriate processor at the appropriate time. All jobs in the current plan have Tivoli Workload Scheduler for z/OS status codes that indicate the progress of work. When a job’s predecessors are complete, Tivoli Workload Scheduler for z/OS considers it ready for submission. It verifies that all requested resources are available, and when these conditions are met, it causes the job to be submitted.
1.3.2 Tivoli Workload Scheduler for z/OS architecture
IBM Tivoli Workload Scheduler for z/OS consists of a controller and one or more trackers. The controller, which runs on a z/OS system, manages the Tivoli Workload Scheduler for z/OS databases and the long-term and current plans. The controller schedules work and causes jobs to be submitted to the appropriate system at the appropriate time.
4 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 21. Trackers are installed on every system managed by the controller. The tracker is the link between the controller and the managed system. The tracker submits jobs when the controller instructs it to do so, and it passes job start and job end information back to the controller. The controller can schedule jobs on z/OS system using trackers or on other operating systems using fault-tolerant agents (FTAs). FTAs can be run on many operating systems, including AIX, Linux®, Solaris, HP-UX, OS/400, and Windows. FTAs run IBM Tivoli Workload Scheduler, formerly called Maestro. The most common way of working with the controller is via ISPF panels. However, several other methods are available, including Program Interfaces, TSO commands, and the Job Scheduling Console. The Job Scheduling Console (JSC) is a Java™-based graphical user interface for controlling and monitoring workload on the mainframe and other platforms. The first version of JSC was released at the same time as Tivoli OPC Version 2.3. The current version of JSC (1.3) has been updated with several new functions specific to Tivoli Workload Scheduler for z/OS. JSC provides a common interface to both Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler. For more information about IBM Tivoli Workload Scheduler for z/OS architecture, see Chapter 2, “End-to-end scheduling architecture” on page 25. 1.4 Introduction to Tivoli Workload Scheduler IBM Tivoli Workload Scheduler is descended from the Unison Maestro program. Unison Maestro was developed by Unison Software on the Hewlett-Packard MPE operating system. It was then ported to UNIX and Windows. In its various manifestations, Tivoli Workload Scheduler has a 17-year track record. During the processing day, Tivoli Workload Scheduler manages the production environment and automates most operator activities. It prepares jobs for execution, resolves interdependencies, and launches and tracks each job. Because jobs begin as soon as their dependencies are satisfied, idle time is minimized. Jobs never run out of sequence. If a job fails, IBM Tivoli Workload Scheduler can handle the recovery process with little or no operator intervention. 1.4.1 Overview of IBM Tivoli Workload Scheduler As with IBM Tivoli Workload Scheduler for z/OS, there are two basic aspects to job scheduling in IBM Tivoli Workload Scheduler: The database and the plan. The database contains all definitions for scheduling objects, such as jobs, job streams, resources, and workstations. It also holds statistics of job and job stream execution, as well as information on the user ID that created an object Chapter 1. Introduction 5
  • 22. and when an object was last modified. The plan contains all job scheduling activity planned for a period of one day. In IBM Tivoli Workload Scheduler, the plan is created every 24 hours and consists of all the jobs, job streams, and dependency objects that are scheduled to execute for that day. Job streams that do not complete successfully can be carried forward into the next day’s plan. 1.4.2 IBM Tivoli Workload Scheduler architecture A typical IBM Tivoli Workload Scheduler network consists of a master domain manager, domain managers, and fault-tolerant agents. The master domain manager, sometimes referred to as just the master, contains the centralized database files that store all defined scheduling objects. The master creates the plan, called Symphony, at the start of each day. Each domain manager is responsible for distribution of the plan to the fault-tolerant agents (FTAs) in its domain. A domain manager also handles resolution of dependencies between FTAs in its domain. FTAs are the workhorses of a Tivoli Workload Scheduler network. FTAs are where most jobs are run. As their name implies, fault-tolerant agents are fault tolerant. This means that in the event of a loss of communication with the domain manager, FTAs are capable of resolving local dependencies and launching their jobs without interruption. FTAs are capable of this because each FTA has its own copy of the plan. The plan contains a complete set of scheduling instructions for the production day. Similarly, a domain manager can resolve dependencies between FTAs in its domain even in the event of a loss of communication with the master, because the domain manager’s plan receives updates from all subordinate FTAs and contains the authoritative status of all jobs in that domain. The master domain manager is updated with the status of all jobs in the entire IBM Tivoli Workload Scheduler network. Logging and monitoring of the IBM Tivoli Workload Scheduler network is performed on the master. Starting with Tivoli Workload Scheduler Version 7.0, a new Java-based graphical user interface was made available to provide an easy-to-use interface to Tivoli Workload Scheduler. This new GUI is called Job Scheduling Console (JSC). The current version of JSC has been updated with several functions specific to Tivoli Workload Scheduler. The JSC provides a common interface to both Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS. For more about IBM Tivoli Workload Scheduler architecture, see Chapter 2, “End-to-end scheduling architecture” on page 25. 6 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
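Because each fault-tolerant agent holds its own copy of the plan, the day-to-day status of jobs and job streams can be inspected locally with the conman command-line interface that ships with IBM Tivoli Workload Scheduler. The following is a brief sketch; the object selection arguments are illustrative, and the full syntax is in the product reference:

conman "ss @#@"         # show the status of all job streams (schedules) in the local plan
conman "sj @#@.@"       # show the status of all jobs in all job streams
conman "sj @#@.@;info"  # include script or command details for each job

The same information is also available graphically through the Job Scheduling Console described above.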
  • 23. 1.5 Benefits of integrating Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler Both Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler have individual strengths. While an enterprise running mainframe and non-mainframe systems could schedule and control work using only one of these tools or using both tools separately, a complete solution requires that Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler work together. The Tivoli Workload Scheduler for z/OS long-term plan gives peace of mind by showing the workload forecast weeks or months into the future. Tivoli Workload Scheduler fault-tolerant agents go right on running jobs even if they lose communication with the domain manager. Tivoli Workload Scheduler for z/OS manages huge numbers of jobs through a sysplex of connected z/OS systems. Tivoli Workload Scheduler extended agents can control work on applications such as SAP R/3 and Oracle. Many data centers need to schedule significant amounts of both mainframe and non-mainframe jobs. It is often desirable to have a single point of control for scheduling on all systems in the enterprise, regardless of platform, operating system, or application. These businesses would probably benefit from implementing the end-to-end scheduling configuration. End-to-end scheduling enables the business to make the most of its computing resources. That said, the end-to-end scheduling configuration is not necessarily the best way to go for every enterprise. Some computing environments would probably benefit from keeping their mainframe and non-mainframe schedulers separate. Others would be better served by integrating the two schedulers in a different way (for example, z/OS [or MVS™] extended agents). Enterprises with a majority of jobs running on UNIX and Windows servers might not want to cede control of these jobs to the mainframe. Because the end-to-end solution involves software components on both mainframe and non-mainframe systems, there will have to be a high level of cooperation between your mainframe operators and your UNIX and Windows system administrators. Careful consideration of the requirements of end-to-end scheduling is necessary before going down this path. There are also several important decisions that must be made before beginning an implementation of end-to-end scheduling. For example, there is a trade-off between centralized control and fault tolerance. Careful planning now can save you time and trouble later. In Chapter 3, “Planning end-to-end scheduling with Tivoli Workload Scheduler 8.2” on page 109, we explain in detail the decisions that must be made prior to implementation. We strongly recommend that you read this chapter in full before beginning any implementation. Chapter 1. Introduction 7
1.6 Summary of enhancements in V8.2 related to end-to-end scheduling
Version 8.2 is the latest version of both IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS. In this section we cover the new functions that affect end-to-end scheduling in three categories.
1.6.1 New functions related to performance and scalability
Several features are now available with IBM Tivoli Workload Scheduler for z/OS 8.2 that directly or indirectly affect performance.
Multiple first-level domain managers
In IBM Tivoli Workload Scheduler for z/OS 8.1, there was a limitation of only one first-level domain manager (called the primary domain manager). In Version 8.2, you can have multiple first-level domain managers (that is, the level immediately below OPCMASTER). See Figure 1-2 on page 9. This allows greater flexibility and scalability and eliminates a potential performance bottleneck. It also allows greater freedom in defining your Tivoli Workload Scheduler distributed network.
8 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
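With Version 8.2, a topology such as the one in Figure 1-2 can therefore contain several domains whose parent is MASTERDM. As a rough sketch, the two first-level domains in that figure could be described to the end-to-end server with DOMREC statements similar to the following (the domain and workstation names are taken from the figure; the surrounding TOPOLOGY parameters are omitted):

DOMREC DOMAIN(DOMAINZ)                /* first first-level domain         */
       DOMMNGR(DMZ)                   /* AIX domain manager workstation   */
       DOMPARENT(MASTERDM)            /* parent is the master domain      */
DOMREC DOMAIN(DOMAINY)                /* second first-level domain        */
       DOMMNGR(DMY)                   /* AIX domain manager workstation   */
       DOMPARENT(MASTERDM)            /* parent is the master domain      */

Each of the DMZ and DMY workstations would additionally need its own CPUREC definition, as shown later in this chapter.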
Figure 1-2 IBM Tivoli Workload Scheduler network with two first-level domains (the figure shows the z/OS master domain manager OPCMASTER with two first-level domains, DomainZ and DomainY, managed by the AIX domain managers DMZ and DMY; below them are DomainA, DomainB, and DomainC with domain managers DMA, DMB, and DMC on HP-UX and AIX, and fault-tolerant agents FTA1-FTA4 on AIX, Linux, Windows 2000, and Solaris)
Improved SCRIPTLIB parser
The job definitions for non-centralized scripts are kept in members in the SCRPTLIB data set (EQQSCLIB DD statement). The definitions are specified in keywords and parameter definitions. See Example 1-1.
Example 1-1 SCRPTLIB data set
BROWSE   TWS.INST.SCRPTLIB(AIXJOB01) - 01.08        Line 00000000 Col 001
Command ===>                                                 Scroll ===>
********************************* Top of Data *****************************
/* Job to be executed on AIX machines */
VARSUB
  TABLES(FTWTABLE)
  PREFIX('&')
  VARFAIL(YES)
  TRUNCATE(NO)
JOBREC
  JOBSCR('&TWSHOME./scripts/return_rc.sh 2')
  RCCONDSUC('(RC=4) OR (RC=6)')
RECOVERY
  OPTION(STOP)
  MESSAGE('Reply Yes when OK to continue')
Chapter 1. Introduction 9
******************************** Bottom of Data ***************************
The information in the SCRPTLIB member must be parsed every time a job is added to the Symphony file (either at Symphony creation time or dynamically). In IBM Tivoli Workload Scheduler 8.1, the TSO parser was used, but this caused a major performance issue: up to 70% of the time that it took to create a Symphony file was spent parsing the SCRIPTLIB library members. In Version 8.2, a new parser has been implemented that significantly reduces the parsing time and consequently the Symphony file creation time.
Check server status before Symphony file creation
In an end-to-end configuration, daily planning batch jobs require that both the controller and server are active to be able to synchronize all the tasks and avoid unprocessed events being left in the event files. If the server is not active, the daily planning batch process now fails at the beginning to avoid pointless extra processing. Two new log messages show the status of the end-to-end server:
EQQ3120E END-TO-END SERVER NOT AVAILABLE
EQQZ193I END-TO-END TRANSLATOR SERVER PROCESS IS NOW AVAILABLE
Improved job log retrieval performance
In IBM Tivoli Workload Scheduler 8.1, the thread structure of the Translator process implied that only the usual incoming events were immediately notified to the controller; job log events were detected by the controller only when another event arrived or after a 30-second timeout. In IBM Tivoli Workload Scheduler 8.2, a new input-writer thread has been implemented that manages the writing of events to the input queue and takes input from both the input translator and the job log retriever. This enables the job log retriever to test whether there is room on the input queue; if there is not, it loops until enough space is available. Meanwhile the input translator can continue to write its smaller events to the queue.
1.6.2 General enhancements
In this section, we cover enhancements in the general category.
Centralized Script Library Management
In order to ease the migration path from OPC tracker agents to IBM Tivoli Workload Scheduler Distributed Agents, a new function has been introduced in Tivoli Workload Scheduler 8.2 called Centralized Script Library Management (or Centralized Scripting). It is now possible to use the Tivoli Workload Scheduler for z/OS engine as the centralized repository for the scripts of distributed jobs.
10 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
The centralized script is stored in the JOBLIB, and it provides features that were available on the OPC tracker agents, such as:
JCL editing
Variable substitution and job setup
Automatic recovery
Support for usage of the job-submit exit (EQQUX001)
Note: The centralized script feature is not supported for fault tolerant jobs running on an AS/400 fault tolerant agent.
Rules for defining centralized scripts
To define a centralized script in the JOBLIB, the following rules must be considered:
The lines that start with //* OPC, //*%OPC, and //*>OPC are used for the variable substitution and the automatic recovery. They are removed before the script is downloaded to the distributed agent.
Each line runs from column 1 to column 80. A backslash (\) in column 80 is the continuation character.
Blanks at the end of the line are automatically removed.
These rules guarantee compatibility with the old tracker agent jobs.
Note: The SCRIPTLIB follows the TSO rules, so the rules to define a centralized script in the JOBLIB differ from those to define the JOBSCR and JOBCMD of a non-centralized script. For more details, refer to 4.5.2, “Definition of centralized scripts” on page 219.
A new data set, EQQTWSCS, has been introduced with this new release to facilitate centralized scripting. EQQTWSCS is a PDSE data set used to temporarily store a script when it is downloaded from the JOBLIB data set to the agent for its submission.
User interface changes for the centralized script
Centralized Scripting required changes to several Tivoli Workload Scheduler for z/OS interfaces such as ISPF, the Job Scheduling Console, and a number of batch interfaces. In this section, we cover the changes to the ISPF and Job Scheduling Console user interfaces. In ISPF, a new job option has been added to specify whether an operation that runs on a fault tolerant workstation has a centralized script. It can have the value Y or N:
Y if the job has the script stored centrally in the JOBLIB.
Chapter 1. Introduction 11
N if the script is stored locally and the job has the job definition in the SCRIPTLIB.
In the database, the value of this new job option can be modified during the add/modify of an application or operation. It can be set for every operation, without workstation checking. When a new operation is created, the default value for this option is N. For non-FTW (fault tolerant workstation) operations, the value of the option is automatically changed to Y during Daily Plan or when exiting the Modify an occurrence or Create an occurrence dialog. The new Centralized Script option was added for operations in the Application Description database and is always editable (Figure 1-3).
Figure 1-3 CENTRALIZED SCRIPT option in the AD dialog
The Centralized Script option has also been added for operations in the current plan. It is editable only when adding a new operation. It can be browsed when modifying an operation (Figure 1-4 on page 13).
12 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
Figure 1-4 CENTRALIZED SCRIPT option in the CP dialog
Similarly, a Centralized Script option has been added to the Job Scheduling Console dialog for creating an FTW task, as shown in Figure 1-5.
Figure 1-5 Centralized Script option in the JSC dialog
Chapter 1. Introduction 13
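To give a feel for what a centralized script looks like, the following is a minimal sketch of a JOBLIB member for a distributed job, following the rules listed earlier. The member content, the script path, the use of the //*%OPC directives, and the supplied variable &OYMD1. are illustrative assumptions here, not taken from the scenarios in this book:

//*%OPC SCAN
//* OPC Centralized script for the daily extract on an AIX FTA
#!/bin/ksh
# The directive lines above are resolved and removed by the controller
# before the script is sent to the fault-tolerant agent.
/prod/scripts/daily_extract.sh &OYMD1.

Because the script is stored centrally, it can be edited with the normal JCL editing dialogs and can use variable substitution and automatic recovery, as noted above.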
Considerations when using centralized scripts
Using centralized scripts can ease the migration path from OPC tracker agents to FTAs. It is also easier to maintain centralized scripts because they are kept in a central location, but these benefits come with some limitations. When deciding whether to store the script locally or centrally, take into consideration that:
The script must be downloaded every time a job runs. There is no caching mechanism on the FTA. The script is discarded as soon as the job completes. A rerun of a centralized job causes the script to be downloaded again.
There is a reduction in fault tolerance, because the centralized dependency can be released only by the controller.
Recovery for non-centralized jobs
In Tivoli Workload Scheduler 8.2, a new, simple syntax has been added to the job definition to specify recovery options and actions. Recovery is performed automatically on the FTA in case of an abend. With this feature, it is now possible to use recovery for jobs running in an end-to-end network as implemented in IBM Tivoli Workload Scheduler distributed.
Defining recovery for non-centralized jobs
To activate recovery for a non-centralized job, you have to specify the RECOVERY statement in the job member in the SCRPTLIB. It is possible to specify one or both of the following recovery actions:
A recovery job (JOBCMD or JOBSCR keywords)
A recovery prompt (MESSAGE keyword)
The recovery actions must be followed by one of the recovery options (the OPTION keyword): stop, continue, or rerun. The default is stop with no recovery job and no recovery prompt. Figure 1-6 on page 15 shows the syntax of the RECOVERY statement.
14 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
Figure 1-6 Syntax of the RECOVERY statement
The keywords JOBUSR, JOBWS, INTRACTV, and RCCONDSUC can be used only if you have defined a recovery job using the JOBSCR or JOBCMD keyword. You cannot use the recovery prompt if you specify the recovery STOP option without using a recovery job. Having OPTION(RERUN) and no recovery prompt specified could cause a loop. To prevent this situation, after a failed rerun of the job, a recovery prompt message is shown automatically.
Note: The RECOVERY statement is ignored if it is used with a job that runs a centralized script. For more details, refer to 4.5.3, “Definition of non-centralized scripts” on page 221.
Recovery actions available
The following table describes the recovery actions that can be taken against a job that ended in error (and not failed). Note that JobP is the principal job, while JobR is the recovery job.
Table 1-1 The recovery actions taken against a job that ended in error
No recovery prompt / No recovery job
   Stop: JobP remains in error.
   Continue: JobP is completed.
   Rerun: Rerun JobP.
A recovery prompt / No recovery job
   Stop: Issue the prompt. JobP remains in error.
   Continue: Issue the recovery prompt. If “yes” reply, JobP is completed. If “no” reply, JobP remains in error.
   Rerun: Issue the prompt. If “no” reply, JobP remains in error. If “yes” reply, rerun JobP.
Chapter 1. Introduction 15
No recovery prompt / A recovery job
   Stop: Launch JobR. If it is successful, JobP is completed; otherwise JobP remains in error.
   Continue: Launch JobR. JobP is completed.
   Rerun: Launch JobR. If it is successful, rerun JobP; otherwise JobP remains in error.
A recovery prompt / A recovery job
   Stop: Issue the prompt. If “no” reply, JobP remains in error. If “yes” reply, launch JobR. If it is successful, JobP is completed; otherwise JobP remains in error.
   Continue: Issue the prompt. If “no” reply, JobP remains in error. If “yes” reply, launch JobR. JobP is completed.
   Rerun: Issue the prompt. If “no” reply, JobP remains in error. If “yes” reply, launch JobR. If it is successful, rerun JobP; otherwise JobP remains in error.
Job Instance Recovery Information panels
Figure 1-7 shows the Job Scheduling Console Job Instance Recovery Information panel. You can browse the job log of the recovery job, and you can reply to the recovery prompt. Note the mapping between the fields in the Job Scheduling Console panel and the JOBREC parameters.
Figure 1-7 JSC and JOBREC parameters mapping
16 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
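As an illustration of the combinations in Table 1-1, the following is a rough sketch of a SCRPTLIB member that pairs a recovery job with the rerun option. The script paths and the user name are invented for this example and are not part of the book's scenarios:

JOBREC
  JOBSCR('/prod/scripts/load_sales.sh')
  JOBUSR('prodbatch')
/* JobR below is the recovery job; OPTION(RERUN) reruns JobP if JobR succeeds */
RECOVERY
  OPTION(RERUN)
  MESSAGE('Check the database, then reply Yes to rerun the load')
  JOBSCR('/prod/scripts/cleanup_sales.sh')
  JOBUSR('prodbatch')

With these definitions, an abend of the principal job causes the prompt to be issued; a “yes” reply launches the recovery job and, if it succeeds, the principal job is rerun, exactly as described in the last row of the table.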
Also note that you can access the same recovery information from the ISPF panels. From the Operation list in MCP (option 5.3), if the operation has abended and the RECOVERY statement has been used, you can use the row command RI (Recovery Information) to display the new panel EQQRINP, as shown in Figure 1-8.
Figure 1-8 EQQRINP ISPF panel
Variable substitution for non-centralized jobs
In Tivoli Workload Scheduler 8.2, a new, simple syntax has been added to the job definition to specify variable substitution directives. This provides the capability to use variable substitution for jobs running in an end-to-end network without using the centralized script solution. Tivoli Workload Scheduler for z/OS–supplied variables and user-defined variables (defined using a table) are supported in this new function. Variables are substituted when a job is added to the Symphony file (that is, when Daily Planning creates the Symphony file or the job is added to the plan using the MCP dialog). To activate variable substitution, use the VARSUB statement. The syntax of the VARSUB statement is given in Figure 1-9 on page 18. Note that it must be the first statement in the SCRPTLIB member containing the job definition. The VARSUB statement enables you to specify variables when you set a statement keyword in the job definition.
Chapter 1. Introduction 17
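As a hedged sketch of how this fits together (the table name, variable name, and script path below are illustrative, not from the book's scenarios), a non-centralized job definition that substitutes a user-defined variable from a variable table could start like this:

/* Search order: occurrence-level table, then SALESTAB, then the global table */
VARSUB
  TABLES(APPL,SALESTAB,GLOBAL)
  PREFIX('&')
  VARFAIL(YES)
  TRUNCATE(NO)
JOBREC
  JOBSCR('/prod/scripts/extract_&REGION..sh')

Here &REGION. is assumed to be defined in the SALESTAB variable table; VARFAIL(YES) requests that a failed substitution be treated as an error rather than being ignored.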
Figure 1-9 Syntax of the VARSUB statement
Use the TABLES keyword to identify the variable tables that must be searched and the search order. In particular:
APPL indicates the application variable table specified in the VARIABLE TABLE field on the MCP panel, at Occurrence level.
GLOBAL indicates the table defined in the GTABLE keyword of the OPCOPTS controller and BATCHOPT batch options.
Any non-alphanumeric character, except blanks, can be used as a symbol to indicate that the characters that follow represent a variable. You can define two kinds of symbols using the PREFIX or BACKPREF keywords in the VARSUB statement; this allows you to define simple and compound variables. For more details, refer to 4.5.3, “Definition of non-centralized scripts” on page 221, and “Job Tailoring” in IBM Tivoli Workload Scheduler for z/OS Managing the Workload, SC32-1263.
Return code mapping
In Tivoli Workload Scheduler 8.1, if a fault tolerant job ends with a return code greater than 0, it is considered abended. It should be possible to define whether a job is successful or abended according to a “success condition” defined at the job level; this would supply the NOERROR functionality, previously supported only for host jobs. In Tivoli Workload Scheduler for z/OS 8.2, a new keyword (RCCONDSUC) has been added to the job definition to specify the success condition, and the Tivoli Workload Scheduler for z/OS 8.2 interfaces show the operation's return code. Customize the JOBREC and the RECOVERY statements in the SCRIPTLIB to specify a success condition for the job by adding the RCCONDSUC keyword. The success condition expression can contain a combination of comparison and Boolean expressions.
18 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
Comparison expression
A comparison expression specifies the job return codes. The syntax is:
(RC operator operand)
RC         The RC keyword.
Operator   The comparison operator. Table 1-2 lists the values it can have.
Operand    An integer between -2147483647 and 2147483647.
Table 1-2 Comparison operator values
Example      Operator   Description
RC < a       <          Less than
RC <= a      <=         Less than or equal to
RC > a       >          Greater than
RC >= a      >=         Greater than or equal to
RC = a       =          Equal to
RC <> a      <>         Not equal to
Note: Unlike IBM Tivoli Workload Scheduler distributed, the != operator is not supported to specify a ‘not equal to’ condition.
The successful RC is specified by a logical combination of comparison expressions. The syntax is: comparison_expression operator comparison_expression. For example, you can define a successful job as a job that ends with a return code less than 3 or equal to 5 as follows:
RCCONDSUC('(RC<3) OR (RC=5)')
Note: If you do not specify RCCONDSUC, only a return code equal to zero corresponds to a successful condition.
Late job handling
In IBM Tivoli Workload Scheduler 8.2 distributed, a user can define a DEADLINE time for a job or a job stream. If the job never started or if it is still executing after the deadline time has passed, Tivoli Workload Scheduler informs the user about the missed deadline.
Chapter 1. Introduction 19
IBM Tivoli Workload Scheduler for z/OS 8.2 now supports this function. In Version 8.2, the user can specify and modify a deadline time for a job or a job stream. If the job is running on a fault-tolerant agent, the deadline time is also stored in the Symphony file, and it is managed locally by the FTA. In an end-to-end network, the deadline is always defined for operations and occurrences. To improve performance, the batchman process on USS does not check the deadline.
1.6.3 Security enhancements
This new version includes a number of security enhancements, which are discussed in this section.
Firewall support in an end-to-end environment
In previous versions of Tivoli Workload Scheduler for z/OS, running the commands to start or stop a workstation or to get the standard list required opening a direct TCP/IP connection between the originator and the destination nodes. In a firewall environment, this forces users to break the firewall to open a direct communication path between the Tivoli Workload Scheduler for z/OS master and each fault-tolerant agent in the network. In this version, it is now possible to enable the firewall support of Tivoli Workload Scheduler in an end-to-end environment. If a firewall exists between a workstation and its domain manager, in order to force the start, stop, and get job output commands to go through the domain's hierarchy, it is necessary to set the FIREWALL option to YES in the CPUREC statement. Example 1-2 shows a CPUREC definition that enables the firewall support.
Example 1-2 CPUREC definition with firewall support enabled
CPUREC CPUNAME(TWAD)
       CPUOS(WNT)
       CPUNODE(jsgui)
       CPUDOMAIN(maindom)
       CPUTYPE(FTA)
       FIREWALL(Y)
SSL support
It is now possible to enable the strong authentication and encryption (SSL) support of IBM Tivoli Workload Scheduler in an end-to-end environment. You can enable the Tivoli Workload Scheduler processes that run as USS (UNIX System Services) processes in the Tivoli Workload Scheduler for z/OS address
20 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 37. space to establish SSL authentication between a Tivoli Workload Scheduler for z/OS master and the underlying IBM Tivoli Workload Scheduler domain managers. The authentication mechanism of IBM Tivoli Workload Scheduler is based on the OpenSSL toolkit, while IBM Tivoli Workload Scheduler for z/OS uses the System SSL services of z/OS. To enable SSL authentication for your end-to-end network, you must perform the following actions: 1. Create as many private keys, certificates, and trusted certification authority (CA) chains as you plan to use in your network. Refer to the OS/390 V2R10.0 System SSL Programming Guide and Reference, SC23-3978, for further details about the SSL protocol. 2. Customize the localopts file on IBM Tivoli Workload Scheduler workstations. To find how to enable SSL in the IBM Tivoli Workload Scheduler domain managers, refer to IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264. 3. Configure IBM Tivoli Workload Scheduler for z/OS: – Customize localopts file on USS workdir. – Customize the TOPOLOGY statement for the OPCMASTER. – Customize CPUREC statements for every workstation in the net. Refer to IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265, for the SSL support in the Tivoli Workload Scheduler for z/OS. 1.7 The terminology used in this book The IBM Tivoli Workload Scheduler 8.2 suite comprises two somewhat different software programs, each with its own history and terminology. For this reason, there are sometimes two different and interchangeable names for the same thing. Other times, a term used in one context can have a different meaning in another context. To help clear up this confusion, we now introduce some of the terms and acronyms that will be used throughout the book. In order to make the terminology used in this book internally consistent, we adopted a system of terminology that may be a bit different than that used in the product documentation. So take a moment to read through this list, even if you are already familiar with the products. IBM Tivoli Workload Scheduler 8.2 suite Chapter 1. Introduction 21
  • 38. The suite of programs that includes IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS. These programs are used together to make end-to-end scheduling work. Sometimes called just IBM Tivoli Workload Scheduler. IBM Tivoli Workload Scheduler This is the version of IBM Tivoli Workload Scheduler that runs on UNIX, OS/400, and Windows operating systems, as distinguished from IBM Tivoli Workload Scheduler for z/OS, a somewhat different program. Sometimes called IBM Tivoli Workload Scheduler Distributed. IBM Tivoli Workload Scheduler is based on the old Maestro program. IBM Tivoli Workload Scheduler for z/OS This is the version of IBM Tivoli Workload Scheduler that runs on z/OS, as distinguished from IBM Tivoli Workload Scheduler (by itself, without the for z/OS specification). IBM Tivoli Workload Scheduler for z/OS is based on the old OPC program. Master The top level of the IBM Tivoli Workload Scheduler or IBM Tivoli Workload Scheduler for z/OS scheduling network. Also called the master domain manager, because it is the domain manager of the MASTERDM (top-level) domain. Domain manager The agent responsible for handling dependency resolution for subordinate agents. Essentially an FTA with a few extra responsibilities. Fault-tolerant agent An agent that keeps its own local copy of the plan file and can continue operation even if the connection to the parent domain manager is lost. Also called an FTA. In IBM Tivoli Workload Scheduler for z/OS, FTAs are referred to as fault tolerant workstations. Scheduling engine An IBM Tivoli Workload Scheduler engine or IBM Tivoli Workload Scheduler for z/OS engine. IBM Tivoli Workload Scheduler engine The part of IBM Tivoli Workload Scheduler that does actual scheduling work, as distinguished from the other components that are related primarily to the user interface (for example, the IBM Tivoli Workload Scheduler connector). Essentially the part of IBM Tivoli Workload Scheduler that is descended from the old Maestro program. 22 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 39. IBM Tivoli Workload Scheduler for z/OS engine The part of IBM Tivoli Workload Scheduler for z/OS that does actual scheduling work, as distinguished from the other components that are related primarily to the user interface (for example, the IBM Tivoli Workload Scheduler for z/OS connector). Essentially the controller plus the server. IBM Tivoli Workload Scheduler for z/OS controller The part of the IBM Tivoli Workload Scheduler for z/OS engine that is based on the old OPC program. IBM Tivoli Workload Scheduler for z/OS server The part of IBM Tivoli Workload Scheduler for z/OS that is based on the UNIX IBM Tivoli Workload Scheduler code. Runs in UNIX System Services (USS) on the mainframe. JSC Job Scheduling Console. This is the common graphical user interface (GUI) to both the IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS scheduling engines. Connector A small program that provides an interface between the common GUI (Job Scheduling Console) and one or more scheduling engines. The connector translates to and from the different “languages” used by the different scheduling engines. JSS Job Scheduling Services. Essentially a library that is used by the connectors. TMF Tivoli Management Framework. Also called just the Framework. Chapter 1. Introduction 23
  • 41. 2 Chapter 2. End-to-end scheduling architecture End-to-end scheduling involves running programs on multiple platforms. For this reason, it is important to understand how the different components work together. Taking the time to get acquainted with end-to-end scheduling architecture will make it easier for you to install, use, and troubleshoot your end-to-end scheduling system. In this chapter, the following topics are discussed: “IBM Tivoli Workload Scheduler for z/OS architecture” on page 27 “Tivoli Workload Scheduler architecture” on page 50 “End-to-end scheduling architecture” on page 59 “Job Scheduling Console and related components” on page 89 If you are unfamiliar with IBM Tivoli Workload Scheduler for z/OS, you can start with the section about its architecture to get a better understanding of how it works. If you are already familiar with Tivoli Workload Scheduler for z/OS but would like to learn more about IBM Tivoli Workload Scheduler (for other platforms such as UNIX, Windows, or OS/400), you can skip to that section. © Copyright IBM Corp. 2004 25
  • 42. If you are already familiar with both IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS, skip ahead to the third section, in which we describe how both programs work together when configured as an end-to-end network. The Job Scheduling Console, its components, and its architecture, are described in the last topic. In this topic, we describe the different components that are used to establish a Job Scheduling Console environment. 26 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 43. 2.1 IBM Tivoli Workload Scheduler for z/OS architecture IBM Tivoli Workload Scheduler for z/OS expands the scope for automating your data processing operations. It plans and automatically schedules the production workload. From a single point of control, it drives and controls the workload processing at both local and remote sites. By using IBM Tivoli Workload Scheduler for z/OS to increase automation, you use your data processing resources more efficiently, have more control over your data processing assets, and manage your production workload processing better. IBM Tivoli Workload Scheduler for z/OS is composed of three major features: The IBM Tivoli Workload Scheduler for z/OS agent feature The agent is the base product in IBM Tivoli Workload Scheduler for z/OS. The agent is also called a tracker. It must run on every operating system in your z/OS complex on which IBM Tivoli Workload Scheduler for z/OS controlled work runs. The agent records details of job starts and passes that information to the engine, which updates the plan with statuses. The IBM Tivoli Workload Scheduler for z/OS engine feature One z/OS operating system in your complex is designated the controlling system and it runs the engine. The engine is also called the controller. Only one engine feature is required, even when you want to establish standby engines on other z/OS systems in a sysplex. The engine manages the databases and the plans and causes the work to be submitted at the appropriate time and at the appropriate system in your z/OS sysplex or on another system in a connected z/OS sysplex or z/OS system. The IBM Tivoli Workload Scheduler for z/OS end-to-end feature This feature makes it possible for the IBM Tivoli Workload Scheduler for z/OS engine to manage a production workload in a Tivoli Workload Scheduler distributed environment. You can schedule, control, and monitor jobs in Tivoli Workload Scheduler from the Tivoli Workload Scheduler for z/OS engine with this feature. The end-to-end feature is covered in 2.3, “End-to-end scheduling architecture” on page 59. The workload on other operating environments can also be controlled with the open interfaces that are provided with Tivoli Workload Scheduler for z/OS. Sample programs using TCP/IP or a Network Job Entry/Remote Spooling Communication Subsystem (NJE/RSCS) combination show you how you can control the workload on environments that at present have no scheduling feature. Chapter 2. End-to-end scheduling architecture 27
  • 44. In addition to these major parts, the IBM Tivoli Workload Scheduler for z/OS product also contains the IBM Tivoli Workload Scheduler for z/OS connector and the Job Scheduling Console (JSC). IBM Tivoli Workload Scheduler for z/OS connector Maps the Job Scheduling Console commands to the IBM Tivoli Workload Scheduler for z/OS engine. The Tivoli Workload Scheduler for z/OS connector requires that the Tivoli Management Framework be configured for a Tivoli server or Tivoli managed node. Job Scheduling Console A Java-based graphical user interface (GUI) for the IBM Tivoli Workload Scheduler suite. The Job Scheduling Console runs on any machine from which you want to manage Tivoli Workload Scheduler for z/OS engine plan and database objects. It provides, through the IBM Tivoli Workload Scheduler for z/OS connector, functionality similar to the IBM Tivoli Workload Scheduler for z/OS legacy ISPF interface. You can use the Job Scheduling Console from any machine as long as it has a TCP/IP link with the machine running the IBM Tivoli Workload Scheduler for z/OS connector. The same Job Scheduling Console can be used for Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS. In the next topics, we provide an overview of IBM Tivoli Workload Scheduler for z/OS configuration, the architecture, and the terminology used in Tivoli Workload Scheduler for z/OS. 2.1.1 Tivoli Workload Scheduler for z/OS configuration IBM Tivoli Workload Scheduler for z/OS supports many configuration options using a variety of communication methods: The controlling system (the controller or engine) Controlled z/OS systems Remote panels and program interface applications Job Scheduling Console Scheduling jobs that are in a distributed environment using Tivoli Workload Scheduler (described in 2.3, “End-to-end scheduling architecture” on page 59) 28 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
The controlling system
The controlling system requires both the agent and the engine. One controlling system can manage the production workload across all of your operating environments. The engine is the focal point of control and information. It contains the controlling functions, the dialogs, the databases, the plans, and the scheduler’s own batch programs for housekeeping and so forth. Only one engine is required to control the entire installation, including local and remote systems. Because IBM Tivoli Workload Scheduler for z/OS provides a single point of control for your production workload, it is important to make this system redundant. This minimizes the risk of having any outages in your production workload in case the engine or the system with the engine fails. To make the engine redundant, one can start backup engines (hot standby engines) on other systems in the same sysplex as the active engine. If the active engine or the controlling system fails, Tivoli Workload Scheduler for z/OS can automatically transfer the controlling functions to a backup system within a Parallel Sysplex. Through the cross-system coupling facility (XCF), IBM Tivoli Workload Scheduler for z/OS can automatically maintain production workload processing during system failures. The standby engine can be started on several z/OS systems in the sysplex. Figure 2-1 on page 30 shows an active engine with two standby engines running in one sysplex. When an engine is started on a system in the sysplex, it checks whether there is already an active engine in the sysplex. If there are no active engines, it becomes the active engine. If there is an active engine, it becomes a standby engine. The engine in Figure 2-1 on page 30 has connections to eight agents: three in the sysplex, two remote, and three in another sysplex. The agents on the remote systems and in the other sysplex are connected to the active engine via ACF/VTAM® connections.
Chapter 2. End-to-end scheduling architecture 29
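As a rough sketch of how a hot standby engine is typically enabled (the group, member, and subsystem values here are illustrative; see IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265, for the full statement descriptions), the active and standby engines join the same XCF group and a standby declares itself with OPCHOST(STANDBY):

/* Initialization statements for a standby engine in the sysplex */
OPCOPTS OPCHOST(STANDBY)              /* start as a standby, not active      */
XCFOPTS GROUP(TWSGRP)                 /* XCF group shared by the engines     */
        MEMBER(TWSC2)                 /* unique member name for this engine  */
        TAKEOVER(SYSFAIL,HOSTFAIL)    /* take over automatically on failure  */

The active engine would use OPCHOST(YES) with its own member name in the same group, so that a standby can take over the controlling functions automatically when the active engine or its system fails.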
Figure 2-1 Two sysplex environments and stand-alone systems (the figure shows an active engine and two standby engines with agents in one z/OS sysplex, connected through VTAM to remote agents on stand-alone systems and to agents in a second z/OS sysplex)
Controlled z/OS systems
An agent is required for every controlled z/OS system in a configuration. This includes, for example, locally controlled systems within shared DASD or sysplex configurations. The agent runs as a z/OS subsystem and interfaces with the operating system through JES2 or JES3 (the Job Entry Subsystem) and SMF (System Management Facility), using the subsystem interface and the operating system exits. The agent monitors and logs the status of work, and passes the status information to the engine via shared DASD, XCF, or ACF/VTAM. You can exploit z/OS and the cross-system coupling facility (XCF) to connect your local z/OS systems. Rather than being passed to the controlling system via shared DASD, work status information is passed directly via XCF connections. XCF enables you to exploit all production-workload-restart facilities and the hot standby function in Tivoli Workload Scheduler for z/OS.
Remote systems
The agent on a remote z/OS system passes status information about the production work in progress to the engine on the controlling system. All communication between Tivoli Workload Scheduler for z/OS subsystems on the controlling and remote systems is done via ACF/VTAM.
30 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 47. Tivoli Workload Scheduler for z/OS enables you to link remote systems using ACF/VTAM networks. Remote systems are frequently used locally (on premises) to reduce the complexity of the data processing installation. Remote panels and program interface applications ISPF panels and program interface (PIF) applications can run in a different z/OS system than the one where the active engine is running. Dialogs and PIF applications send requests to and receive data from a Tivoli Workload Scheduler for z/OS server that is running on the same z/OS system as the target engine, via advanced program-to-program communications (APPC). The APPC server communicates with the active engine to perform the requested actions. Using an APPC server for ISPF panels and PIF gives the user the freedom to run ISPF panels and PIF on any system in a z/OS enterprise, as long as this system has advanced program-to-program communication with the system where the active engine is started. This also means that you do not have to make sure that your PIF jobs always run on the z/OS system where the active engine is started. Furthermore, using the APPC server makes it seamless for panel users and PIF programs if the engine is moved to its backup engine. The APPC server is a separate address space, started and stopped either automatically by the engine, or by the user via the z/OS start command. There can be more than one server for an engine. If the dialogs or the PIF applications run on the same z/OS system as the target engine, the server may not be involved. As shown in Figure 2-2 on page 32, it is possible to run the IBM Tivoli Workload Scheduler for z/OS dialogs and PIF applications from any system as long as the system has an ACF/VTAM connection to the APPC server. Chapter 2. End-to-end scheduling architecture 31
Figure 2-2 APPC server with remote panels and PIF access to ITWS for z/OS (the figure shows the active engine and APPC server in a z/OS sysplex, with ISPF panels and PIF programs on remote systems connecting through VTAM)
Note: Job Scheduling Console is the GUI to both IBM Tivoli Workload Scheduler for z/OS and IBM Tivoli Workload Scheduler. JSC is discussed in 2.4, “Job Scheduling Console and related components” on page 89.
2.1.2 Tivoli Workload Scheduler for z/OS database objects
Scheduling with IBM Tivoli Workload Scheduler for z/OS includes the capability to do the following:
Schedule jobs across multiple systems, locally and remotely.
Group jobs into job streams according to, for example, function or application, and define advanced run cycles based on customized calendars for the job streams.
Set workload priorities and specify times for the submission of particular work.
Base submission of work on availability of resources.
Tailor jobs automatically based on dates, date calculations, and so forth.
Ensure correct processing order by identifying dependencies such as successful completion of previous jobs, availability of resources, and time of day.
32 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 49. Define automatic recovery and restart for jobs. Forward incomplete jobs to the next production day. This is accomplished by defining scheduling objects in the Tivoli Workload Scheduler for z/OS databases that are managed by the active engine and shared by the standby engines. Scheduling objects are combined in these databases so that they represent the workload that you want to have handled by Tivoli Workload Scheduler for z/OS. Tivoli Workload Scheduler for z/OS databases contain information about the work that is to be run, when it should be run, and the resources that are needed and available. This information is used to calculate a forward forecast called the long-term plan. Scheduling objects are elements that are used to define your Tivoli Workload Scheduler for z/OS workload. Scheduling objects include job streams (jobs and dependencies as part of job streams), workstations, calendars, periods, operator instructions, resources, and JCL variables. All of these scheduling objects can be created, modified, or deleted by using the legacy IBM Tivoli Workload Scheduler for z/OS ISPF panels. Job streams, workstations, and resources can be managed from the Job Scheduling Console as well. Job streams A job stream (also known as an application in the legacy OPC ISPF interface) is a description of a unit of production work. It includes a list of jobs (related tasks) that are associated with that unit of work. For example, a payroll job stream might include a manual task in which an operator prepares a job; several computer-processing tasks in which programs are run to read a database, update employee records, and write payroll information to an output file; and a print task that prints paychecks. IBM Tivoli Workload Scheduler for z/OS schedules work based on the information that you provide in your job stream description. A job stream can include the following: A list of the jobs (related tasks) that are associated with that unit of work, such as: – Data entry – Job preparation – Job submission or started-task initiation – Communication with the NetView® program – File transfer to other operating environments Chapter 2. End-to-end scheduling architecture 33
  – Printing of output
  – Post-processing activities, such as quality control or dispatch
  – Other tasks related to the unit of work that you want to schedule, control, and track
- A description of dependencies between jobs within a job stream and between jobs in other job streams
- Information about resource requirements, such as exclusive use of a data set
- Special operator instructions that are associated with a job
- How, when, and where each job should be processed
- Run policies for that unit of work; that is, when it should be scheduled or, alternatively, the name of a group definition that records the run policy

Workstations
When scheduling and processing work, Tivoli Workload Scheduler for z/OS considers the processing requirements of each job. Some typical processing considerations are:
- What human or machine resources are required for processing the work (for example, operators, processors, or printers)?
- When are these resources available?
- How will these jobs be tracked?
- Can this work be processed somewhere else if the resources become unavailable?

You can plan for maintenance windows in your hardware and software environments. Tivoli Workload Scheduler for z/OS enables you to perform a controlled and incident-free shutdown of the environment, preventing last-minute cancellation of active tasks. You can choose to reroute the workload automatically during any outage, planned or unplanned.

Tivoli Workload Scheduler for z/OS tracks jobs as they are processed at workstations and dynamically updates the plan with real-time information about the status of jobs. You can view or modify this status information online using the workstation ready lists in the dialog.

Dependencies
In general, every data-processing-related activity must occur in a specific order. Activities performed out of order will, at the very least, create invalid output; in the worst case, your
corporate data will be corrupted. In any case, the result is costly reruns, missed deadlines, and unsatisfied customers.

You can define dependencies for jobs when a specific processing order is required. When IBM Tivoli Workload Scheduler for z/OS manages the dependent relationships, the jobs are started in the correct order every time they are scheduled. A dependency is called internal when it is between two jobs in the same job stream, and external when it is between two jobs in different job streams. You can work with job dependencies graphically from the Job Scheduling Console (Figure 2-3).

Figure 2-3 Job Scheduling Console display of dependencies between jobs

Calendars
Tivoli Workload Scheduler for z/OS uses information about when your departments work and when they are free, so job streams are not scheduled to run on days when processing resources are not available (such as Sundays and holidays). This information is stored in a calendar. Tivoli Workload Scheduler for z/OS supports multiple calendars for enterprises where different departments have different work days and free
days (different groups within a business operate according to different calendars). The multiple calendar function is critical if your enterprise has installations in more than one geographical location (for example, with different local or national holidays).

Resources
Tivoli Workload Scheduler for z/OS enables you to serialize work based on the status of any data processing resource. A typical example is a job that uses a data set as input but must not start until the data set is successfully created and loaded with valid data. You can use resource serialization support to send availability information about a data processing resource to the workload in Tivoli Workload Scheduler for z/OS.

To accomplish this, Tivoli Workload Scheduler for z/OS uses resources (also called special resources). Resources are typically defined to represent physical or logical objects used by jobs. A resource can be used to serialize access to a data set or to limit the number of file transfers on a particular network link. The resource does not have to represent a physical object in your configuration, although it often does. Tivoli Workload Scheduler for z/OS keeps a record of the state of each resource and its current allocation status. You can choose to hold resources in case a job allocating the resources ends abnormally.

You can also use the Tivoli Workload Scheduler for z/OS interface with the Resource Object Data Manager (RODM) to schedule jobs according to real resource availability. You can subscribe to RODM updates in both local and remote domains.

Tivoli Workload Scheduler for z/OS also enables you to subscribe to data set activity on z/OS systems. Its data set triggering function automatically updates special resource availability when a data set is closed. You can use this notification to coordinate planned activities or to add unplanned work to the schedule.

Periods
Tivoli Workload Scheduler for z/OS uses business processing cycles, or periods, to calculate when your job streams should be run; for example, weekly or every 10th working day. Periods are based on the business cycles of your customers. Tivoli Workload Scheduler for z/OS supports a range of periods for processing the different job streams in your production workload. It has several predefined periods that can be used when defining run cycles for your job streams, such as
week, month, year, and all of the Julian months (January through December).

When you define a job stream, you specify when it should be planned using a run cycle, which can be:
- A rule with a format such as:
  ONLY the SECOND TUESDAY of every MONTH
  EVERY FRIDAY in the user-defined period SEMESTER1
  In this example, the words in capitals are selected from lists of ordinal numbers, names of days, and common calendar intervals or period names, respectively.
- A combination of period and offset. For example, an offset of 10 in a monthly period specifies the tenth day of each month.

Operator instructions
You can specify an operator instruction to be associated with a job in a job stream. This could be, for example, special running instructions for a job or detailed restart information in case a job abends and needs to be restarted.

JCL variables
JCL variables are used to do automatic job tailoring in Tivoli Workload Scheduler for z/OS. There are several predefined JCL variables, such as current date, current time, planning date, day number of week, and so forth. Besides these predefined variables, you can define your own specific or unique variables, so your locally defined variables can be used for automatic job tailoring as well.

2.1.3 Tivoli Workload Scheduler for z/OS plans
IBM Tivoli Workload Scheduler for z/OS plans your production workload schedule. It produces both a high-level (long-term) plan and a detailed (current) plan. These plans drive the production workload and can show the status of the production workload on your system at any specified time. You can produce trial plans to forecast future workloads (for example, to simulate the effects of changes to your production workload, calendar, and installation).

Tivoli Workload Scheduler for z/OS builds the plans from your description of the production workload (that is, the objects you have defined in the Tivoli Workload Scheduler for z/OS databases).

The plan process
First, the long-term plan is created, which shows the job streams that should be run each day in a period, usually for one or two months. Then a more detailed
current plan is created. The current plan is used by Tivoli Workload Scheduler for z/OS to submit and control jobs and job streams.

Long-term planning
The long-term plan is a high-level schedule of your anticipated production workload. It lists, by day, the instances of job streams to be run during the period of the plan. Each instance of a job stream is called an occurrence. The long-term plan shows when occurrences are to run, as well as the dependencies that exist between the job streams. You can view these dependencies graphically on your terminal as a network to check that work has been defined correctly. The plan can assist you in forecasting and planning for heavy processing days. The long-term-planning function can also produce histograms showing planned resource use for individual workstations during the plan period.

You can use the long-term plan as the basis for documenting your service level agreements. It lets you relate service level agreements directly to your production workload schedules so that your customers can see when and how their work is to be processed.

The long-term plan provides a window to the future. How far into the future is up to you, from one day to four years. Normally, the long-term plan goes two to three months into the future. You can also produce long-term plan simulation reports for any future date. IBM Tivoli Workload Scheduler for z/OS can automatically extend the long-term plan at regular intervals. You can print the long-term plan as a report, or you can view, alter, and extend it online using the legacy ISPF dialogs.

The long-term plan extension is performed by a Tivoli Workload Scheduler for z/OS program. This program is normally run as part of the daily Tivoli Workload Scheduler for z/OS housekeeping job stream. By running this program on workdays and letting the program extend the long-term plan by one working day, you ensure that the long-term plan is always up-to-date (Figure 2-4 on page 39).
Figure 2-4 The long-term plan extension process

This way the long-term plan always reflects changes that are made to job streams, run cycles, and calendars, because these definitions are reread by the program that extends the long-term plan. The long-term plan extension program reads job streams (run cycles), calendars, and periods and creates the high-level long-term plan based on these objects.

Current plan
The current plan, or simply the plan, is the heart of Tivoli Workload Scheduler for z/OS processing: It drives the production workload automatically and provides a way to check its status. The current plan is produced by running batch jobs that extract from the long-term plan the occurrences that fall within the specified period of time, taking the job details into account. In other words, the current plan selects a window from the long-term plan and makes the jobs ready to be run. The jobs are actually started depending on the defined restrictions (dependencies, resource availability, or time dependencies).

Job streams and related objects are copied from the Tivoli Workload Scheduler for z/OS databases to the current plan occurrences. Because the objects are copied to the current plan data set, any changes that are made to them in the plan will not be reflected in the Tivoli Workload Scheduler for z/OS databases.

The current plan is a rolling plan that can cover several days. The extension of the current plan is performed by a Tivoli Workload Scheduler for z/OS program that is normally run on workdays as part of the daily housekeeping job stream (Figure 2-5 on page 40).
Figure 2-5 The current plan extension process

Extending the current plan by one workday means that it can cover more than one calendar day. If, for example, Saturday and Sunday are considered free days (in the calendar used by the run cycle for the housekeeping job stream), then when the current plan extension program is run on Friday afternoon, the plan will go to Monday afternoon. A common method is to cover 1–2 days with regular extensions each shift.

Production workload processing activities are listed by minute in the plan. You can either print the current plan as a report, or view, alter, and extend it online by using the legacy ISPF dialogs.

Note: Changes that are made to a job stream run cycle, such as changing the job stream from running on Mondays to running on Tuesdays, will not be reflected immediately in the long-term or current plan. To have such changes reflected in the long-term plan and current plan, you must first run a Modify all or Extend long-term plan and then extend or replan the current plan. Therefore, it is good practice to run the Extend long-term plan with one working day (shown in Figure 2-4 on page 39) before the Extend of current plan as part of normal Tivoli Workload Scheduler for z/OS housekeeping.

Running job streams and jobs in the plan
Tivoli Workload Scheduler for z/OS automatically:
- Starts and stops started tasks
- Edits z/OS job JCL statements before submission
- Submits jobs in the specified sequence to the target operating environment—every time
- Tracks each scheduled job in the plan
- Determines the success or failure of the jobs
- Displays status information and instructions to guide workstation operators
- Provides automatic recovery of z/OS jobs when they end in error
- Generates processing dates for your job stream run cycles using rules such as:
  – Every second Tuesday of the month
  – Only the last Saturday in June, July, and August
  – Every third workday in the user-defined PAYROLL period
- Starts jobs with regard to real resource availability
- Performs data set cleanup in error and rerun situations for the z/OS workload
- Tailors the JCL for step restarts of z/OS jobs and started tasks
- Dynamically schedules additional processing in response to activities that cannot be planned
- Provides automatic notification when an updated data set is closed, which can be used to trigger subsequent processing
- Generates alerts when abnormal situations are detected in the workload

Automatic workload submission
Tivoli Workload Scheduler for z/OS automatically drives work through the system, taking into account work that requires manual or program-recorded completion. (Program-recorded completion refers to situations where the status of a scheduler-controlled job is set to Complete by a user-written program.) It also promotes the optimum use of resources, improves system availability, and automates complex and repetitive operator tasks.

Tivoli Workload Scheduler for z/OS automatically controls the submission of work according to:
- Dependencies between jobs
- Workload priorities
- Specified times for the submission of particular work
- Availability of resources

By saving a copy of the JCL for each separate run, or occurrence, of a particular job in its plans, Tivoli Workload Scheduler for z/OS prevents the unintentional reuse of temporary JCL changes, such as overrides.
Job tailoring
Tivoli Workload Scheduler for z/OS provides automatic job-tailoring functions, which enable jobs to be edited automatically. This can reduce your dependency on time-consuming and error-prone manual editing of jobs. Tivoli Workload Scheduler for z/OS job tailoring provides:
- Automatic variable substitution
- Dynamic inclusion and exclusion of inline job statements
- Dynamic inclusion of job statements from other libraries or from an exit

For jobs to be submitted on a z/OS system, these job statements will be z/OS JCL. Variables can be substituted in specific columns, and you can define verification criteria to ensure that invalid strings are not substituted. Special directives supporting the variety of date formats used by job stream programs enable you to dynamically define the required format and change it multiple times for the same job. Arithmetic expressions can be defined to let you calculate values such as the current date plus four work days.
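As an illustration of what such tailoring can look like, the following fragment is a minimal sketch of OPC-style variable substitution and date arithmetic in a job's JCL. The job, program, and data set names are invented for the example, and the exact directive names, formats, and supported arithmetic should be verified against the Tivoli Workload Scheduler for z/OS job tailoring documentation for your level of the product.

//PAYDAILY JOB (ACCT),'PAYROLL',CLASS=A
//*%OPC SCAN
//*%OPC SETFORM OCDATE=(YYMMDD)
//*%OPC SETVAR TDATE=(OCDATE+4WD)
//* TDATE is a user variable set to the occurrence date plus four work days
//STEP1   EXEC PGM=PAYEXTR
//* &OYMD1 is a predefined variable holding the current date as YYMMDD
//INFILE  DD DSN=PROD.PAYROLL.TRANS.D&OYMD1.,DISP=SHR
//OUTFILE DD DSN=PROD.PAYROLL.EXTRACT.D&TDATE.,DISP=(NEW,CATLG)

At substitution time the scheduler replaces the variables with the resolved dates before the job is submitted, so the same JCL member can be reused every production day.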
Manual control and intervention
Tivoli Workload Scheduler for z/OS enables you to check the status of work and intervene manually when priorities change or when you need to run unplanned work. You can query the status of the production workload and then modify the schedule if needed.

Status inquiries
With the legacy ISPF dialogs or with the Job Scheduling Console, you can make queries online and receive timely information about the status of the production workload. Time information that is displayed by the dialogs can be in the local time of the dialog user. Using the dialogs, you can request detailed or summary information about individual job streams, jobs, and workstations, as well as summary information concerning workload production as a whole. You can also display dependencies graphically as a network at both job stream and job level. Status inquiries:
- Provide you with overall status information that you can use when considering a change in workstation capacity or when arranging an extra shift or overtime work.
- Help you supervise the work flow through the installation; for instance, by displaying the status of work at each workstation.
- Help you decide whether intervention is required to speed the processing of specific job streams. You can find out which job streams are the most critical. You can also check the status of any job stream, as well as the planned and actual times for each job.
- Enable you to check information before making modifications to the plan. For example, you can check the status of a job stream and its dependencies before deleting it or changing its input arrival time or deadline. See "Modifying the current plan" on page 43 for more information.
- Provide you with information about the status of processing at a particular workstation. Perhaps work that should have arrived at the workstation has not arrived. Status inquiries can help you locate the work and find out what has happened to it.

Modifying the current plan
Tivoli Workload Scheduler for z/OS makes status updates to the plan automatically using its tracking functions. However, you can change the plan manually to reflect unplanned changes to the workload or to the operations environment, which often occur during a shift. For example, you may need to change the priority of a job stream, add unplanned work, or reroute work from one workstation to another. Or you may need to correct operational errors manually. Modifying the current plan may be the best way to handle these situations.

You can modify the current plan online. For example, you can:
- Include unexpected jobs or last-minute changes to the plan. Tivoli Workload Scheduler for z/OS then automatically creates the dependencies for this work.
- Manually modify the status of jobs.
- Delete occurrences of job streams.
- Graphically display job dependencies before you modify them.
- Modify the data in job streams, including the JCL.
- Respond to error situations by:
  – Rerouting jobs
  – Rerunning jobs or occurrences
  – Completing jobs or occurrences
  – Changing jobs or occurrences
- Change the status of workstations by:
  – Rerouting work from one workstation to another
  – Modifying workstation reporting attributes
  – Updating the availability of resources
  – Changing the way resources are handled
- Replan or extend the current plan.

In addition to using the dialogs, you can modify the current plan from your own job streams using the program interface or the application programming interface. You can also trigger Tivoli Workload Scheduler for z/OS to dynamically modify the plan using TSO commands or a batch program. This enables unexpected work to be added automatically to the plan.

It is important to remember that the current plan contains copies of the objects that are read from the Tivoli Workload Scheduler for z/OS databases. This means that changes that are made to current plan instances will not be reflected in the corresponding database objects.

2.1.4 Other Tivoli Workload Scheduler for z/OS features
In the following sections we investigate other features of IBM Tivoli Workload Scheduler for z/OS.

Automatically controlling the production workload
Tivoli Workload Scheduler for z/OS automatically drives the production workload by monitoring the flow of work and by directing the processing of jobs so that it follows the business priorities that are established in the plan. Through its interface to the NetView program or its management-by-exception ISPF dialog, Tivoli Workload Scheduler for z/OS can alert the production control specialist to problems in the production workload processing. Furthermore, the NetView program can automatically trigger Tivoli Workload Scheduler for z/OS to perform corrective actions in response to these problems.

Recovery and restart
Tivoli Workload Scheduler for z/OS provides automatic restart facilities for your production work. You can specify the restart actions to be taken if work that it initiates ends in error (Figure 2-6 on page 45). You can use these functions to predefine automatic error-recovery and restart actions for jobs and started tasks. The scheduler's integration with the NetView for OS/390 program enables it to automatically pass alerts to NetView for OS/390 in error situations. Use of the z/OS cross-system coupling facility (XCF) enables Tivoli Workload Scheduler for z/OS to continue processing when system failures occur.
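To give a concrete flavor of how such predefined recovery actions are expressed, the sketch below shows an OPC-style automatic recovery statement placed in a job's JCL. The job, step, and program names are hypothetical, and the exact set of RECOVER keywords and value formats should be checked against the Tivoli Workload Scheduler for z/OS documentation before use.

//PAYLOAD  JOB (ACCT),'PAYROLL LOAD',CLASS=A
//*%OPC RECOVER JOBCODE=(B37),RESTART=(YES)
//* If the job fails with a B37 abend, the scheduler reruns it automatically
//LOADSTEP EXEC PGM=PAYLOAD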
Figure 2-6 IBM Tivoli Workload Scheduler for z/OS automatic recovery and restart

Recovery of jobs and started tasks
Automatic recovery actions for failed jobs are specified in user-defined control statements. Parameters in these statements determine the recovery actions to be taken when a job or started task ends in error.

Restart and cleanup
Restart and cleanup are basically two tasks:
- Restarting an operation at the job level or step level
- Cleaning up the associated data sets

Note: The IBM Tivoli Workload Scheduler for z/OS 8.2 restart and cleanup function has been updated and redesigned. Apply the fixes for APARs PQ79506 and PQ79507 to get the redesigned and updated function.

You can use restart and cleanup to catalog, uncatalog, or delete data sets when a job ends in error or when you need to rerun a job. Data set cleanup takes care of JCL in the form of in-stream JCL, in-stream procedures, and cataloged procedures on both local and remote systems. This function can be initiated automatically by Tivoli Workload Scheduler for z/OS or manually by a user through the panels. Tivoli Workload Scheduler for z/OS resets the catalog to the status that it was in before the job ran, both for generation data set groups (GDGs)
and for DD-allocated data sets contained in the JCL. In addition, restart and cleanup supports the use of Removable Media Manager in your environment.

Restart at both the step level and the job level is also provided in the IBM Tivoli Workload Scheduler for z/OS legacy ISPF panels and in the JSC. It manages resolution of generation data group (GDG) names, as well as JCL containing nested INCLUDE or PROC statements and IF-THEN-ELSE statements. Tivoli Workload Scheduler for z/OS also automatically identifies problems that can prevent a successful restart, providing logic to determine the "best restart step." You can browse the job log or request a step-level restart for any z/OS job or started task even when there are no catalog modifications. The job-log browse functions are also available for the workload on other operating platforms, which is especially useful for those environments that do not support a System Display and Search Facility (SDSF) or something similar. These facilities are available to you without the need to make changes to your current JCL. Tivoli Workload Scheduler for z/OS gives you an enterprise-wide data set cleanup capability on remote agent systems.

Production workload restart
Tivoli Workload Scheduler for z/OS provides a production workload restart, which can automatically maintain the processing of your work if a system or connection fails. Scheduler-controlled production work for the unsuccessful system is rerouted to another system. Because Tivoli Workload Scheduler for z/OS can restart and manage the production workload, the integrity of your processing schedule is maintained, and service continues for your customers.

Tivoli Workload Scheduler for z/OS exploits the VTAM Model Application Program Definition feature and z/OS-defined symbols to ease configuration and operation in a sysplex environment, giving the user a single-system view of the sysplex. Starting, stopping, and managing your engines and agents does not require you to know on which z/OS image in the sysplex they are actually running.

z/OS Automatic Restart Manager support
In case of program failure, all of the scheduler components can be restarted by the Automatic Restart Manager (ARM) of the z/OS operating system.

Automatic status checking
To track the work flow, Tivoli Workload Scheduler for z/OS interfaces directly with the operating system, collecting and analyzing status information about the production work that is currently active in the system. Tivoli Workload Scheduler for z/OS can record status information from both local and remote processors.
When status information is reported from remote sites in different time zones, Tivoli Workload Scheduler for z/OS makes allowances for the time differences.

Status reporting from heterogeneous environments
The processing on other operating environments can also be tracked by Tivoli Workload Scheduler for z/OS. You can use supplied programs to communicate with the engine from any environment that can establish communications with a z/OS system.

Status reporting from user programs
You can pass status information about production workload processing to Tivoli Workload Scheduler for z/OS from your own user programs through a standard supplied routine.

Additional job-completion checking
If required, Tivoli Workload Scheduler for z/OS provides further status checking by scanning SYSOUT and other print data sets from your processing when the success or failure of the processing cannot be determined by completion codes. For example, Tivoli Workload Scheduler for z/OS can check the text of system messages or messages originating from your user programs. Using information contained in job completion checker (JCC) tables, Tivoli Workload Scheduler for z/OS determines what actions to take when it finds certain text strings. These actions can include:
- Reporting errors
- Re-queuing SYSOUT
- Writing incident records to an incident data set

Managing unplanned work
Tivoli Workload Scheduler for z/OS can be automatically triggered to update the current plan with information about work that cannot be planned in advance. This enables Tivoli Workload Scheduler for z/OS to control unexpected work. Because it checks the processing status of this work, automatic recovery facilities are also available.

Interfacing with other programs
Tivoli Workload Scheduler for z/OS provides a program interface (PIF) with which you can automate most actions that you can perform online through the dialogs. This interface can be called from CLISTs, user programs, and TSO commands. The application programming interface (API) lets your programs communicate with Tivoli Workload Scheduler for z/OS from any compliant platform. You can use Common Programming Interface for Communications (CPI-C), advanced program-to-program communication (APPC), or your own logical unit (LU) 6.2
verbs to converse with Tivoli Workload Scheduler for z/OS through the API. You can use this interface to query and update the current plan. The programs can be running on any platform that is connected, locally or remotely through a network, to the z/OS system where the engine runs.

Management of critical jobs
IBM Tivoli Workload Scheduler for z/OS exploits the capability of the Workload Manager (WLM) component of z/OS to ensure that critical jobs are completed on time. If a critical job is late, Tivoli Workload Scheduler for z/OS favors it using the existing Workload Manager interface.

Security
Today, data processing operations increasingly require a high level of data security, particularly as the scope of data processing operations expands and more people within the enterprise become involved. Tivoli Workload Scheduler for z/OS provides complete security and data integrity within the range of its functions. It provides a shared central service to different user departments, even when the users are in different companies and countries, and a high level of security to protect scheduler data and resources from unauthorized access.

With Tivoli Workload Scheduler for z/OS, you can easily organize, isolate, and protect user data to safeguard the integrity of your end-user applications (Figure 2-7). Tivoli Workload Scheduler for z/OS can plan and control the work of many user groups and maintain complete control of access to data and services.

Figure 2-7 IBM Tivoli Workload Scheduler for z/OS security
Audit trail
With the audit trail, you can define how you want IBM Tivoli Workload Scheduler for z/OS to log accesses (both reads and updates) to scheduler resources. Because it provides a history of changes to the databases, the audit trail can be extremely useful for staff that works with debugging and problem determination. A sample program is provided for reading audit-trail records. The program reads the logs for a period that you specify and produces a report detailing the changes that have been made to scheduler resources.

System Authorization Facility (SAF)
IBM Tivoli Workload Scheduler for z/OS uses the System Authorization Facility, a function of z/OS, to pass authorization verification requests to your security system (for example, RACF®). This means that you can protect your scheduler data objects with any security system that uses the SAF interface.

Protection of data and resources
Each user request to access a function or to access data is validated by SAF. This is some of the information that can be protected:
- Calendars and periods
- Job stream names or job stream owner, by name
- Workstations, by name
- Job stream-specific data in the plan
- Operator instructions
- JCL

To support distributed, multi-user handling, Tivoli Workload Scheduler for z/OS enables you to control the level of security that you want to implement, right down to the level of individual records. You can define generic or specific RACF resource names to extend the level of security checking. If you have RACF Version 2 Release 1 installed, you can use the IBM Tivoli Workload Scheduler for z/OS reserved resource class (IBMOPC) to manage your Tivoli Workload Scheduler for z/OS security environment. This means that you do not have to define your own resource class by modifying RACF and restarting your system.

Data integrity during submission
Tivoli Workload Scheduler for z/OS can ensure the correct security environment for each job it submits, regardless of whether the job is run on a local or a remote system. Tivoli Workload Scheduler for z/OS enables you to create tailored security profiles for individual jobs or groups of jobs.
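As a rough illustration of how such protection is typically set up, the commands below sketch defining a profile in the reserved IBMOPC class and granting a planning group update access. The profile name and group are hypothetical, and the resource-name format used for scheduler fixed resources should be taken from the product's security documentation; only the general RACF command pattern is shown.

RDEFINE IBMOPC JS.PAY* UACC(NONE)
PERMIT  JS.PAY* CLASS(IBMOPC) ID(PAYPLAN) ACCESS(UPDATE)
SETROPTS RACLIST(IBMOPC) REFRESH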
2.2 Tivoli Workload Scheduler architecture
Tivoli Workload Scheduler helps you plan every phase of production. During the processing day, its production control programs manage the production environment and automate most operator activities. Tivoli Workload Scheduler prepares jobs for execution, resolves interdependencies, and launches and tracks each job. Because jobs start running as soon as their dependencies are satisfied, idle time is minimized and throughput is improved. Jobs never run out of sequence. If a job ends in error, Tivoli Workload Scheduler handles the recovery process with little or no operator intervention.

IBM Tivoli Workload Scheduler is composed of three major parts:

IBM Tivoli Workload Scheduler engine
The IBM Tivoli Workload Scheduler engine is installed on every non-mainframe workstation in the scheduling network (UNIX, Windows, and OS/400 computers). When the engine is installed on a workstation, it can be configured to play a specific role in the scheduling network. For example, the engine can be configured to be a master domain manager, a domain manager, or a fault-tolerant agent. In an ordinary Tivoli Workload Scheduler network, there is a single master domain manager at the top of the network. However, in an end-to-end scheduling network, there is no master domain manager. Instead, its functions are performed by the IBM Tivoli Workload Scheduler for z/OS engine, installed on a mainframe. This is discussed in more detail later in this chapter.

IBM Tivoli Workload Scheduler connector
The connector "connects" the Job Scheduling Console to Tivoli Workload Scheduler, routing commands from the JSC to the Tivoli Workload Scheduler engine. In an ordinary IBM Tivoli Workload Scheduler network, the Tivoli Workload Scheduler connector is usually installed on the master domain manager. In an end-to-end scheduling network, there is no master domain manager, so the connector is usually installed on the first-level domain managers. The Tivoli Workload Scheduler connector can also be installed on other domain managers or fault-tolerant agents in the network. The connector software is installed on top of the Tivoli Management Framework, which must be configured as a Tivoli Management Region server or managed node. The connector software cannot be installed on a TMR endpoint.

Job Scheduling Console (JSC)
The JSC is the Java-based graphical user interface for the IBM Tivoli Workload Scheduler suite. The Job Scheduling Console runs on any machine from which you want to manage Tivoli Workload Scheduler plan and database objects. It provides, through the Tivoli Workload Scheduler connector, the
functions of the command-line programs conman and composer. The Job Scheduling Console can be installed on a desktop workstation or laptop, as long as the JSC has a TCP/IP link with the machine running the Tivoli Workload Scheduler connector. Using the JSC, operators can schedule and administer Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS over the network.

In the next sections, we provide an overview of the IBM Tivoli Workload Scheduler network and workstations, the topology that is used to describe the Tivoli Workload Scheduler architecture, the Tivoli Workload Scheduler components, and the plan.

2.2.1 The IBM Tivoli Workload Scheduler network
A Tivoli Workload Scheduler network is made up of the workstations, or CPUs, on which jobs and job streams are run. A Tivoli Workload Scheduler network contains at least one IBM Tivoli Workload Scheduler domain, the master domain, in which the master domain manager is the management hub. It is the master domain manager that manages the databases, and it is from the master domain manager that you define new objects in the databases. Additional domains can be used to divide a widely distributed network into smaller, locally managed groups.

In the simplest configuration, the master domain manager maintains direct communication with all of the workstations (fault-tolerant agents) in the Tivoli Workload Scheduler network. All workstations are in the same domain, MASTERDM (Figure 2-8).

Figure 2-8 A sample IBM Tivoli Workload Scheduler network with only one domain
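As a quick, hedged illustration of how an operator sees such a network at run time (the selection strings are typical, but output columns vary by release), the conman command-line program on the master domain manager can list the link status of every workstation and the status of every job in the plan:

conman "showcpus @!@"
conman "showjobs @#@.@"

The first command reports all workstations in all domains; the second reports all jobs on all workstations in the current Symphony file.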
Using multiple domains reduces the amount of network traffic by reducing the communications between the master domain manager and the other computers in the network. Figure 2-9 depicts an example of a Tivoli Workload Scheduler network with three domains. In this example, the master domain manager is shown as an AIX system. The master domain manager does not have to be on an AIX system; it can be installed on any of several different platforms, including AIX, Linux, Solaris, HPUX, and Windows. Figure 2-9 is only an example that is meant to give an idea of a typical Tivoli Workload Scheduler network.

Figure 2-9 IBM Tivoli Workload Scheduler network with three domains

In this configuration, the master domain manager communicates directly only with the subordinate domain managers. The subordinate domain managers communicate with the workstations in their domains. In this way, the number of connections from the master domain manager is reduced. Multiple domains also provide fault tolerance: If the link from the master is lost, a domain manager can still manage the workstations in its domain and resolve dependencies between them. This limits the impact of a network outage. Each domain may also have one or more backup domain managers that can become the domain manager for the domain if the domain manager fails.

Before the start of each day, the master domain manager creates a plan for the next 24 hours. This plan is placed in a production control file, named Symphony. Tivoli Workload Scheduler is then restarted throughout the network, and the master domain manager sends a copy of the Symphony file to each of the
subordinate domain managers. Each domain manager then sends a copy of the Symphony file to the fault-tolerant agents in that domain. After the network has been started, scheduling events such as job starts and completions are passed up from each workstation to its domain manager. The domain manager updates its Symphony file with the events and then passes the events up the network hierarchy to the master domain manager. The events are then applied to the Symphony file on the master domain manager. Events from all workstations in the network are passed up to the master domain manager. In this way, the master's Symphony file contains the authoritative record of what has happened during the production day. The master also broadcasts the changes down throughout the network, updating the Symphony files of domain managers and fault-tolerant agents that are running in full status mode.

It is important to remember that Tivoli Workload Scheduler does not limit the number of domains or levels (the hierarchy) in the network. There can be as many levels of domains as is appropriate for a given computing environment. The number of domains or levels in the network should be based on the topology of the physical network where Tivoli Workload Scheduler is installed. Most often, geographical boundaries are used to determine divisions between domains. See 3.5.4, "Network planning and considerations" on page 141 for more information about how to design an IBM Tivoli Workload Scheduler network.

Figure 2-10 on page 54 shows an example of a four-tier Tivoli Workload Scheduler network:
1. Master domain manager, MASTERDM
2. DomainA and DomainB
3. DomainC, DomainD, DomainE, FTA1, FTA2, and FTA3
4. FTA4, FTA5, FTA6, FTA7, FTA8, and FTA9
Figure 2-10 A multi-tiered IBM Tivoli Workload Scheduler network

2.2.2 Tivoli Workload Scheduler workstation types
In most cases, workstation definitions refer to physical workstations. However, in the case of extended and network agents, the workstations are logical definitions that must be hosted by a physical IBM Tivoli Workload Scheduler workstation. There are several different types of Tivoli Workload Scheduler workstations:

Master domain manager (MDM)
The domain manager of the topmost domain of a Tivoli Workload Scheduler network. It contains the centralized database of all defined scheduling objects, including all jobs and their dependencies. It creates the plan at the start of each day and performs all logging and reporting for the network. The master distributes the plan to all subordinate domain managers and fault-tolerant agents. In an end-to-end scheduling network, the IBM Tivoli Workload Scheduler for z/OS engine (controller) acts as the master domain manager.
Domain manager (DM)
The management hub in a domain. All communications to and from the agents in a domain are routed through the domain manager. The domain manager can resolve dependencies between jobs on its subordinate agents. The copy of the plan on the domain manager is updated with reporting and logging from the subordinate agents.

Backup domain manager
A fault-tolerant agent that is capable of assuming the responsibilities of its domain manager. The copy of the plan on the backup domain manager is updated with the same reporting and logging information as the domain manager plan.

Fault-tolerant agent (FTA)
A workstation that is capable of resolving local dependencies and launching its jobs in the absence of a domain manager. It has a local copy of the plan generated on the master domain manager. It is also called a fault-tolerant workstation.

Standard agent (SA)
A workstation that launches jobs only under the direction of its domain manager.

Extended agent (XA)
A logical workstation definition that enables you to launch and control jobs on other systems and applications. IBM Tivoli Workload Scheduler for Applications includes extended agent methods for the following systems: SAP R/3, Oracle Applications, PeopleSoft, CA7, JES2, and JES3.

Figure 2-11 on page 56 shows a Tivoli Workload Scheduler network with some of the different workstation types. It is important to remember that domain manager FTAs, including the master domain manager FTA and backup domain manager FTAs, are FTAs with some extra responsibilities. The servers that host these FTAs can, and most often will, be servers where you run normal batch jobs that are scheduled and tracked by Tivoli Workload Scheduler. This means that these servers do not have to be dedicated only to Tivoli Workload Scheduler work; they can still do other work and run other applications. However, you should not choose one of your busiest servers as one of your first-level Tivoli Workload Scheduler domain managers.
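To make the workstation types more concrete, the following sketch shows what a fault-tolerant agent definition can look like in the composer command-line language on a distributed master. The host name, domain, and description are invented, and the exact attribute set varies by release; in an end-to-end network the same information is instead supplied to the controller with CPUREC statements, as discussed in 2.3.2.

cpuname FTA1
  description "Payroll fault-tolerant agent"
  os UNIX
  node fta1.example.com
  tcpaddr 31111
  domain DOMAINA
  for maestro
    type FTA
    autolink ON
    fullstatus OFF
    resolvedep OFF
  end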
Figure 2-11 IBM Tivoli Workload Scheduler network with different workstation types

2.2.3 Tivoli Workload Scheduler topology
The purpose of having multiple domains is to delegate some of the responsibilities of the master domain manager and to provide extra fault tolerance. Fault tolerance is enhanced because a domain manager can continue to resolve dependencies within the domain even if the master domain manager is temporarily unavailable.

Workstations are generally grouped into a domain because they share a common set of characteristics. Most often, workstations are grouped into a domain because they are in close physical proximity to one another, such as in the same office. Domains may also be based on organizational unit (for example, department), business function, or application. Grouping related workstations in a domain reduces the amount of information that must be communicated between domains, and thereby reduces the amount of network traffic generated.

In 3.5.4, "Network planning and considerations" on page 141, you can find more information about how to configure an IBM Tivoli Workload Scheduler network based on your particular distributed network and environment.
2.2.4 IBM Tivoli Workload Scheduler components
Tivoli Workload Scheduler is composed of several separate programs, each with a distinct function. This division of labor segregates networking, dependency resolution, and job launching into their own individual processes. These processes communicate among themselves through the use of message files (also called event files). Every event that occurs during the production day is handled by passing events between processes through the message files.

A computer running Tivoli Workload Scheduler has several active IBM Tivoli Workload Scheduler processes. They are started as a system service, by the StartUp command, or manually from the Job Scheduling Console. The main processes are:

netman
The network listener program, which initially receives all TCP connections. The netman program accepts an incoming request from a remote program, spawns a new process to handle the request, and if necessary hands the socket over to the new process.

writer
The network writer process that passes incoming messages from a remote workstation to the local mailman process (via the Mailbox.msg event file).

mailman
The primary message management process. The mailman program reads events from the Mailbox.msg file and then either passes them to batchman (via the Intercom.msg event file) or sends them to a remote workstation.

batchman
The production control process. Working with the plan (Symphony), batchman starts job streams, resolves dependencies, and directs jobman to launch jobs. After the Symphony file has been created (at the beginning of the production day), batchman is the only program that makes changes to the Symphony file.

jobman
The job control process. The jobman program launches and monitors jobs.

Figure 2-12 on page 58 shows the IBM Tivoli Workload Scheduler processes and their intercommunication via message files.
Figure 2-12 IBM Tivoli Workload Scheduler interprocess communication

2.2.5 IBM Tivoli Workload Scheduler plan
The IBM Tivoli Workload Scheduler plan is the to-do list that tells Tivoli Workload Scheduler what jobs to run and what dependencies must be satisfied before each job is launched. The plan usually covers 24 hours; this period is sometimes referred to as the production day and can start at any point in the day. The best time of day to create a new plan is a time when few or no jobs are expected to be running.

A new plan is created at the start of the production day. After the plan has been created, a copy is sent to all subordinate workstations: the master domain manager distributes the plan to its fault-tolerant agents and to its subordinate domain managers, and the subordinate domain managers distribute their copy to all of the fault-tolerant agents in their domain and to all domain managers that are subordinate to them, and so on down the line. This enables fault-tolerant agents throughout the network to continue processing even if the network connection to their domain manager is down. From the Job Scheduling Console or the command-line interface, the operator can view and make changes in the day's production by making changes in the Symphony file.

Figure 2-13 on page 59 shows the distribution of the Symphony file from the master domain manager to the domain managers and their subordinate agents.
Figure 2-13 Distribution of the plan (Symphony file) in a Tivoli Workload Scheduler network

IBM Tivoli Workload Scheduler processes monitor the Symphony file and make calls to the operating system to launch jobs as required. The operating system runs the job and, in return, informs IBM Tivoli Workload Scheduler whether the job has completed successfully or not. This information is entered into the Symphony file to indicate the status of the job. In this way, the Symphony file is continuously updated with the status of all jobs: the work that needs to be done, the work in progress, and the work that has been completed.

2.3 End-to-end scheduling architecture
In the two previous sections, 2.2, "Tivoli Workload Scheduler architecture" on page 50, and 2.1, "IBM Tivoli Workload Scheduler for z/OS architecture" on page 27, we described the architecture of Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS. In this section, we bring the two together and describe how the programs work together to function as a unified end-to-end scheduling solution.

End-to-end scheduling makes it possible to schedule and control jobs in mainframe, Windows, and UNIX environments, providing truly distributed scheduling. In the end-to-end configuration, Tivoli Workload Scheduler for z/OS
is used as the planner for the job scheduling environment. Tivoli Workload Scheduler domain managers and fault-tolerant agents are used to schedule on the non-mainframe platforms, such as UNIX and Windows.

2.3.1 How end-to-end scheduling works
End-to-end scheduling means controlling scheduling from one end of an enterprise to the other — from the mainframe all the way down to the client workstation. Tivoli Workload Scheduler provides an end-to-end scheduling solution whereby one or more IBM Tivoli Workload Scheduler domain managers, and their underlying agents and domains, are put under the direct control of an IBM Tivoli Workload Scheduler for z/OS engine. To the domain managers and FTAs in the network, the IBM Tivoli Workload Scheduler for z/OS engine appears to be the master domain manager.

Tivoli Workload Scheduler for z/OS creates the plan (the Symphony file) for the Tivoli Workload Scheduler network and sends the plan down to the first-level domain managers. Each of these domain managers sends the plan to all of the subordinate workstations in its domain. The domain managers act as brokers for the distributed network by resolving all dependencies for the subordinate managers and agents. They send their updates (in the form of events) to Tivoli Workload Scheduler for z/OS, which updates the plan accordingly. Tivoli Workload Scheduler for z/OS handles its own jobs and notifies the domain managers of all the status changes of its jobs that involve the IBM Tivoli Workload Scheduler plan. In this configuration, the domain managers and all the Tivoli Workload Scheduler workstations recognize Tivoli Workload Scheduler for z/OS as the master domain manager and notify it of all of the changes occurring in their own plans. At the same time, the agents are not permitted to interfere with the Tivoli Workload Scheduler for z/OS jobs, because those jobs are viewed as running on the master, which is the only node in charge of them.

In Figure 2-14 on page 61, you can see a Tivoli Workload Scheduler network managed by a Tivoli Workload Scheduler for z/OS engine. This is accomplished by connecting a Tivoli Workload Scheduler domain manager directly to the Tivoli Workload Scheduler for z/OS engine. In fact, if you compare Figure 2-9 on page 52 with Figure 2-14 on page 61, you will see that the Tivoli Workload Scheduler network that is connected to Tivoli Workload Scheduler for z/OS is the same network that was managed by a Tivoli Workload Scheduler master domain manager. When connecting this network to the engine, the AIX server that was acting as the Tivoli Workload Scheduler master domain manager is replaced by a mainframe. The new master domain manager is the Tivoli Workload Scheduler for z/OS engine.
Figure 2-14 IBM Tivoli Workload Scheduler for z/OS end-to-end scheduling

In Tivoli Workload Scheduler for z/OS, you can access job streams (also known as schedules in Tivoli Workload Scheduler and applications in Tivoli Workload Scheduler for z/OS) and add them to the current plan in Tivoli Workload Scheduler for z/OS. In addition, you can build dependencies among Tivoli Workload Scheduler for z/OS job streams and Tivoli Workload Scheduler jobs. From Tivoli Workload Scheduler for z/OS, you can monitor and control the FTAs.

In the Tivoli Workload Scheduler for z/OS current plan, you can specify jobs to run on workstations in the Tivoli Workload Scheduler network. The Tivoli Workload Scheduler for z/OS engine passes the job information to the Symphony file in the Tivoli Workload Scheduler for z/OS server, which in turn passes the Symphony file to the first-level Tivoli Workload Scheduler domain managers to distribute and process. In turn, Tivoli Workload Scheduler reports the status of running and completed jobs back to the current plan for monitoring in the Tivoli Workload Scheduler for z/OS engine.

The IBM Tivoli Workload Scheduler for z/OS engine is composed of two components (started tasks on the mainframe): the controller and the server (also called the end-to-end server).
2.3.2 Tivoli Workload Scheduler for z/OS end-to-end components
To run Tivoli Workload Scheduler for z/OS end-to-end scheduling, you must have a Tivoli Workload Scheduler for z/OS server started task dedicated to end-to-end scheduling. It is also possible to use the same server to communicate with the Job Scheduling Console. Tivoli Workload Scheduler for z/OS uses TCP/IP for this communication. The Tivoli Workload Scheduler for z/OS controller uses the end-to-end server to communicate events to the FTAs. The end-to-end server starts multiple tasks and processes using z/OS UNIX System Services (USS). The Tivoli Workload Scheduler for z/OS end-to-end server must run on the same z/OS system where the served Tivoli Workload Scheduler for z/OS controller is started and active.

Tivoli Workload Scheduler for z/OS end-to-end scheduling is composed of three major components:
- The IBM Tivoli Workload Scheduler for z/OS controller: Manages database objects, creates plans with the workload, and executes and monitors the workload in the plan.
- The IBM Tivoli Workload Scheduler for z/OS server: Acts as the Tivoli Workload Scheduler master domain manager. It receives a part of the current plan (the Symphony file) from the Tivoli Workload Scheduler for z/OS controller, which contains the jobs and job streams to be executed in the Tivoli Workload Scheduler network. The server is the focal point for all communication to and from the Tivoli Workload Scheduler network.
- IBM Tivoli Workload Scheduler domain managers at the first level: Serve as the communication hub between the Tivoli Workload Scheduler for z/OS server and the distributed Tivoli Workload Scheduler network. The domain managers at the first level are connected directly to the Tivoli Workload Scheduler master domain manager running in USS in the Tivoli Workload Scheduler for z/OS end-to-end server.

In Tivoli Workload Scheduler for z/OS 8.2, you can have one or several Tivoli Workload Scheduler domain managers at the first level. These domain managers are connected directly to the Tivoli Workload Scheduler for z/OS end-to-end server, so they are called first-level domain managers. It is possible to designate backup domain managers for the first-level Tivoli Workload Scheduler domain managers (as it is for "normal" Tivoli Workload Scheduler fault-tolerant agents and domain managers).
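The controller learns about this distributed topology from initialization statements that are read when the end-to-end server and the plan programs run. The fragment below is a hedged sketch of what such definitions can look like; the directory paths, host names, and workstation names are invented, and the full parameter list for the TOPOLOGY, DOMREC, and CPUREC statements should be taken from the product reference. It describes one first-level domain, its domain manager, and one fault-tolerant agent.

TOPOLOGY BINDIR('/usr/lpp/TWS/V8R2M0')
         WRKDIR('/var/TWS/inst')
         HOSTNAME(mvs1.example.com)
         PORTNUMBER(31111)
         LOGLINES(100)

DOMREC   DOMAIN(DOMAINA)
         DOMMNGR(DMA)
         DOMPARENT(MASTERDM)

CPUREC   CPUNAME(DMA)
         CPUOS(AIX)
         CPUNODE(dma.example.com)
         CPUTCPIP(31111)
         CPUDOMAIN(DOMAINA)
         CPUTYPE(FTA)
         CPUFULLSTAT(ON)
         CPURESDEP(ON)

CPUREC   CPUNAME(FTA1)
         CPUOS(UNIX)
         CPUNODE(fta1.example.com)
         CPUTCPIP(31111)
         CPUDOMAIN(DOMAINA)
         CPUTYPE(FTA)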
Detailed description of the communication
Figure 2-15 shows the communication between the Tivoli Workload Scheduler for z/OS controller and the Tivoli Workload Scheduler for z/OS server.

Figure 2-15 IBM Tivoli Workload Scheduler for z/OS 8.2 interprocess communication

Tivoli Workload Scheduler for z/OS server processes and tasks
The end-to-end server address space hosts the tasks and the data sets that function as the intermediaries between the controller and the first-level domain managers. In many cases, these tasks and data sets are replicas of the distributed Tivoli Workload Scheduler processes and files. The Tivoli Workload Scheduler for z/OS server uses the following processes, threads, and tasks for end-to-end scheduling (see Figure 2-15):

netman
The Tivoli Workload Scheduler network listener daemon. It is started automatically when the end-to-end server task starts. The netman process monitors the NetReq.msg queue and listens to the TCP port defined in the server topology portnumber parameter (the default is port 31111). When netman receives a request, it starts another program to handle the request, usually writer or mailman. Requests to start or stop mailman are written by the output
translator to the NetReq.msg queue. Requests to start or stop writer are sent via TCP by the mailman process on a remote workstation (a domain manager at the first level).

writer        One writer process is started by netman for each connected remote workstation (domain manager at the first level). Each writer process receives events from the mailman process on a remote workstation and writes these events to the Mailbox.msg file.

mailman       The main message handler process. Its main tasks are:
              - Routing events. It reads the events stored in the Mailbox.msg queue and sends them either to the controller (writing them to the Intercom.msg file) or to the writer process on a remote workstation (via TCP).
              - Linking to remote workstations (domain managers at the first level). The mailman process requests that the netman program on each remote workstation start a writer process to accept the connection.
              - Sending the Symphony file to subordinate workstations (domain managers at the first level). When a new Symphony file is created, the mailman process sends a copy of the file to each subordinate domain manager and fault-tolerant agent.

batchman      Updates the Symphony file and resolves dependencies at the master level. After the Symphony file has been written the first time, batchman is the only program that makes changes to the file. The batchman program in USS does not perform job submission (this is why there is no jobman process running in UNIX System Services).

translator    Through its input and output threads (discussed in more detail below), the translator process translates events from Tivoli Workload Scheduler format to Tivoli Workload Scheduler for z/OS format and vice versa. The translator program was developed specifically to handle the job of event translation from OPC events to Maestro events, and vice versa. The translator process runs in UNIX System Services on the mainframe; it does not run on domain managers or FTAs. The translator program provides the glue that binds Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler together; translator enables
these two products to function as a unified scheduling system.

job log retriever   A thread of the translator process that is spawned to fetch a job log from a fault-tolerant agent. One job log retriever thread is spawned for each requested FTA job log. The job log retriever receives the log, sizes it according to the LOGLINES parameter, translates it from UTF-8 to EBCDIC, and queues it in the inbound queue of the controller. The retrieval of a job log is a lengthy operation and can take a few moments to complete. The user may request several logs at the same time. The job log retriever thread terminates after the log has been written to the inbound queue. If the IBM Tivoli Workload Scheduler for z/OS ISPF panel interface is used, the user is notified by a message when the job log has been received.

script downloader   A thread of the translator process that is spawned to download the script for an operation (job) defined with the Centralized Script option set to Yes. One script downloader thread is spawned for each script that must be downloaded. Several script downloader threads can be active at the same time. The script that is to be downloaded is received from the output translator.

starter       The main process in UNIX System Services for the end-to-end server. The starter process is the first process that is started in UNIX System Services when the end-to-end server started task is started. The starter process starts the translator and netman processes (the starter is not shown in Figure 2-15 on page 63).

Events passed from the server to the controller

input translator    A thread of the translator process. The input translator thread reads events from the tomaster.msg file and translates them from Tivoli Workload Scheduler format to Tivoli Workload Scheduler for z/OS format. It also performs UTF-8 to EBCDIC translation and sends the translated events to the input writer.

input writer        Receives the input from the job log retriever, input translator, and script downloader and writes it to the inbound queue (the EQQTWSIN data set).
receiver subtask    A subtask of the end-to-end task that runs in the Tivoli Workload Scheduler for z/OS controller. It receives events from the inbound queue and queues them to the Event Manager task. The events have already been filtered and elaborated by the input translator.

Events passed from the controller to the server

sender subtask      A subtask of the end-to-end task in the Tivoli Workload Scheduler for z/OS controller. It receives events for changes to the current plan that are related to Tivoli Workload Scheduler fault-tolerant agents. The Tivoli Workload Scheduler for z/OS tasks that can change the current plan are: General Service (GS), Normal Mode Manager (NMM), Event Manager (EM), and Workstation Analyzer (WA). The events are received via SSI, the usual method that the Tivoli Workload Scheduler for z/OS tasks use to exchange events. The NMM sends events to the sender task when the plan is extended or replanned, for synchronization purposes.

output translator   A thread of the translator process. The output translator thread reads events from the outbound queue. It translates the events from Tivoli Workload Scheduler for z/OS format to Tivoli Workload Scheduler format and evaluates them, performing the appropriate function. Most events, including those related to changes to the Symphony file, are written to Mailbox.msg. Requests to start or stop netman or mailman are written to NetReq.msg. The output translator also translates events from EBCDIC to UTF-8. Depending on the type of event, the output translator performs one of the following actions:
                    - Starts a job log retriever thread if the event is to retrieve the log of a job from a Tivoli Workload Scheduler agent.
                    - Starts a script downloader thread if the event is to download a script.
                    - Queues an event in NetReq.msg if the event is to start or stop mailman.
                    - Queues events in Mailbox.msg for the other events that are sent to update the Symphony file on the Tivoli Workload Scheduler agents (for example, events for a job that has changed status, events for manual changes
                      on jobs or workstations, or events to link or unlink workstations).
                    - Switches the Symphony files.

IBM Tivoli Workload Scheduler for z/OS data sets and files used for end-to-end scheduling
The Tivoli Workload Scheduler for z/OS server and controller use the following data sets and files for end-to-end scheduling:

EQQTWSIN     Sequential data set used to queue events sent by the server to the controller (the inbound queue). It must be defined in the Tivoli Workload Scheduler for z/OS controller and end-to-end server started task procedures (shown as TWSIN in Figure 2-15 on page 63).

EQQTWSOU     Sequential data set used to queue events sent by the controller to the server (the outbound queue). It must be defined in the Tivoli Workload Scheduler for z/OS controller and end-to-end server started task procedures (shown as TWSOU in Figure 2-15 on page 63).

EQQTWSCS     Partitioned data set used to temporarily store a script when it is downloaded from the Tivoli Workload Scheduler for z/OS JOBLIB data set to the fault-tolerant agent for its submission (shown as TWSCS in Figure 2-15 on page 63).

Symphony     HFS file containing the active copy of the plan used by the distributed Tivoli Workload Scheduler agents.

Sinfonia     HFS file containing the distribution copy of the plan used by the distributed Tivoli Workload Scheduler agents. This file is not shown in Figure 2-15 on page 63.

NetReq.msg   HFS file used to queue requests for the netman process.

Mailbox.msg  HFS file used to queue events sent to the mailman process.

Intercom.msg HFS file used to queue events sent to the batchman process.

tomaster.msg HFS file used to queue events sent to the input translator process.
Translator.chk  HFS file used as the checkpoint file for the translator process. It is equivalent to the checkpoint data set used by the Tivoli Workload Scheduler for z/OS controller. For example, it contains information about the status of the Tivoli Workload Scheduler for z/OS current plan, the Symphony run number, and Symphony availability. This file is not shown in Figure 2-15 on page 63.

Translator.wjl  HFS file used to store information about job log retrievals and script downloads that are in progress. At initialization, the translator checks the Translator.wjl file for job log retrievals and script downloads that did not complete (either correctly or in error) and sends the error back to the controller. This file is not shown in Figure 2-15 on page 63.

EQQSCLIB        Partitioned data set used as a repository for jobs with non-centralized script definitions running on FTAs. The EQQSCLIB data set is described in "Tivoli Workload Scheduler for z/OS end-to-end database objects" on page 69. It is not shown in Figure 2-15 on page 63.

EQQSCPDS        VSAM data set containing a copy of the current plan, used by the daily plan batch programs to create the Symphony file. The end-to-end plan creation process is described in 2.3.4, "Tivoli Workload Scheduler for z/OS end-to-end plans" on page 75. It is not shown in Figure 2-15 on page 63.

2.3.3 Tivoli Workload Scheduler for z/OS end-to-end configuration

The topology of the distributed IBM Tivoli Workload Scheduler network that is connected to the IBM Tivoli Workload Scheduler for z/OS engine is described in parameter statements for the Tivoli Workload Scheduler for z/OS server and for the Tivoli Workload Scheduler for z/OS programs that handle the long-term plan and the current plan. Parameter statements are also used to activate the end-to-end subtasks in the Tivoli Workload Scheduler for z/OS controller.

The parameter statements that are used to describe the topology are covered in 4.2.6, "Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling" on page 174. That section also includes an example of how to reflect a specific Tivoli Workload Scheduler network topology in Tivoli Workload
Scheduler for z/OS servers and plan programs using the Tivoli Workload Scheduler for z/OS topology parameter statements.

Tivoli Workload Scheduler for z/OS end-to-end database objects
In order to run jobs on fault-tolerant agents or extended agents, one must first define database objects related to the Tivoli Workload Scheduler workload in the Tivoli Workload Scheduler for z/OS databases. The Tivoli Workload Scheduler for z/OS end-to-end related database objects are:

IBM Tivoli Workload Scheduler for z/OS fault tolerant workstations
A fault tolerant workstation is a computer workstation configured to schedule jobs on FTAs. The workstation must also be defined in the server CPUREC initialization statement (see Figure 2-16 on page 70).

IBM Tivoli Workload Scheduler for z/OS job streams, jobs, and dependencies
Job streams and jobs to run on Tivoli Workload Scheduler FTAs are defined like other job streams and jobs in Tivoli Workload Scheduler for z/OS. To run a job on a Tivoli Workload Scheduler FTA, the job is simply defined on a fault tolerant workstation. Dependencies between Tivoli Workload Scheduler distributed jobs are created exactly the same way as other job dependencies in the Tivoli Workload Scheduler for z/OS controller. This is also the case when creating dependencies between Tivoli Workload Scheduler distributed jobs and Tivoli Workload Scheduler for z/OS mainframe jobs. Some of the Tivoli Workload Scheduler for z/OS mainframe-specific options are not available for Tivoli Workload Scheduler distributed jobs.
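For illustration, the CPUREC statement for a fault-tolerant workstation such as F100 might look like the following sketch; the node name, port, domain, time zone, and user shown here are assumptions made up for this example:

CPUREC CPUNAME(F100)
       CPUOS(AIX)
       CPUNODE(copenhagen.example.com)
       CPUTCPIP(31111)
       CPUDOMAIN(DOMAINA)
       CPUTYPE(FTA)
       CPUAUTOLNK(ON)
       CPUFULLSTAT(OFF)
       CPURESDEP(OFF)
       CPULIMIT(20)
       CPUTZ(Europe/Copenhagen)
       CPUUSER(tws)

Keywords such as CPUFULLSTAT and CPURESDEP control how much status information the workstation receives (they are normally set to ON for domain managers), and CPUUSER supplies the default user for jobs on the workstation when no user is given in the job definition.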
Figure 2-16 A workstation definition and its corresponding CPUREC (the figure shows the F100 workstation definition in ISPF and in the JSC, together with the topology definition for the F100 workstation)

IBM Tivoli Workload Scheduler for z/OS resources
Only global resources are supported and can be used for Tivoli Workload Scheduler distributed jobs. This means that the resource dependency is resolved by the Tivoli Workload Scheduler for z/OS controller and not locally on the FTA. For a job running on an FTA, the use of resources causes the loss of fault tolerance. Only the controller determines the availability of a resource and consequently lets the FTA start the job. Thus, if a job running on an FTA uses a resource, the following occurs:
– When the resource is available, the controller sets the state of the job to started and the extended status to waiting for submission.
– The controller sends a release-dependency event to the FTA.
– The FTA starts the job.

If the connection between the engine and the FTA is broken, the operation does not start on the FTA even if the resource becomes available.
Note: Special resource dependencies are represented differently depending on whether you are looking at the job through Tivoli Workload Scheduler for z/OS interfaces or Tivoli Workload Scheduler interfaces. If you observe the job using Tivoli Workload Scheduler for z/OS interfaces, you can see the resource dependencies as expected. However, when you monitor a job on a fault-tolerant agent by means of the Tivoli Workload Scheduler interfaces, you will not be able to see the resource that is used by the job. Instead you will see a dependency on a job called OPCMASTER#GLOBAL.SPECIAL_RESOURCES. This dependency is set by the engine. Every job that has special resource dependencies has a dependency on this job. When the engine allocates the resource for the job, the dependency is released. (The engine sends a release event for the specific job through the network.)

The task or script associated with the FTA job, defined in Tivoli Workload Scheduler for z/OS
In IBM Tivoli Workload Scheduler for z/OS 8.2, the task or script associated with the FTA job can be defined in two different ways:

a. Non-centralized script
   The job or task definition is stored in a special partitioned data set, EQQSCLIB, allocated in the Tivoli Workload Scheduler for z/OS controller started task procedure. The script itself (the JCL) resides on the fault-tolerant agent. This is the default behavior in Tivoli Workload Scheduler for z/OS for fault-tolerant agent jobs.

b. Centralized script
   The job is defined in Tivoli Workload Scheduler for z/OS with the Centralized Script option set to Y (Yes). Note: The default for all operations and jobs in Tivoli Workload Scheduler for z/OS is N (No). A centralized script resides in the IBM Tivoli Workload Scheduler for z/OS JOBLIB and is downloaded to the fault-tolerant agent every time the job is submitted. The concept of centralized script has been added for compatibility with the way that Tivoli Workload Scheduler for z/OS manages jobs in the z/OS environment.
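As a rough sketch of item b, and assuming that the standard Tivoli Workload Scheduler for z/OS JCL variables and directives apply to centralized scripts in the same way as to ordinary JOBLIB members, a centralized script member might simply contain the script text to be downloaded; the path and member contents below are invented for this illustration:

//*%OPC SCAN
#!/bin/sh
# Centralized script: stored in the JOBLIB and downloaded to the
# FTA at submission time. &ODMY1. is replaced with the occurrence
# date by variable substitution in the controller.
echo "Closing books for &ODMY1."
/opt/payroll/bin/close_books.sh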
Non-centralized script
For every FTA job definition in Tivoli Workload Scheduler for z/OS where the centralized script option is set to N (non-centralized script), there must be a corresponding member in the EQQSCLIB data set. The members of EQQSCLIB contain a JOBREC statement that describes the path to the job or the command to be executed and, optionally, the user to be used when the job or command is executed.

Example for a UNIX script:
JOBREC JOBSCR(/Tivoli/tws/scripts/script001_accounting) JOBUSR(userid01)

Example for a UNIX command:
JOBREC JOBCMD(ls) JOBUSR(userid01)

If the JOBUSR (user for the job) keyword is not specified, the user defined in the CPUUSER keyword of the CPUREC statement for the fault-tolerant workstation is used.

If necessary, Tivoli Workload Scheduler for z/OS JCL variables can be used in the JOBREC definition. Tivoli Workload Scheduler for z/OS JCL variables and variable substitution in an EQQSCLIB member are managed and controlled by VARSUB statements placed directly in the EQQSCLIB member with the JOBREC definition for the particular job. Furthermore, it is possible to define Tivoli Workload Scheduler recovery options for the job defined in the JOBREC statement. Tivoli Workload Scheduler recovery options are defined with RECOVERY statements placed directly in the EQQSCLIB member with the JOBREC definition for the particular job (a combined sketch is shown below).

The JOBREC (and optionally VARSUB and RECOVERY) definitions are read by the Tivoli Workload Scheduler for z/OS plan programs when producing the new current plan and placed as part of the job definition in the Symphony file. If a Tivoli Workload Scheduler distributed job stream is added to the plan in Tivoli Workload Scheduler for z/OS, the JOBREC definition will be read by Tivoli Workload Scheduler for z/OS, copied to the Symphony file on the Tivoli Workload Scheduler for z/OS server, and sent (as events) by the server to the Tivoli Workload Scheduler agent Symphony files via the directly connected Tivoli Workload Scheduler domain managers.

It is important to remember that the EQQSCLIB member only has a pointer (the path) to the job that is going to be executed. The actual job (the JCL) is placed locally on the FTA or workstation in the directory defined by the JOBREC JOBSCR definition.
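The following sketch shows how such an EQQSCLIB member might combine the VARSUB, JOBREC, and RECOVERY statements. The table name, paths, user, recovery action, and the JCL variable in the script name are invented for this illustration:

VARSUB   TABLES(ACCTTAB)
         PREFIX('&')
JOBREC   JOBSCR('/tivoli/tws/scripts/script001_&ODMY1')
         JOBUSR(userid01)
RECOVERY OPTION(RERUN)
         MESSAGE('Accounting script failed - clean up and rerun')
         JOBCMD('/tivoli/tws/scripts/cleanup.sh')
         JOBUSR(userid01)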
This also means that it is not possible to use the JCL edit function in Tivoli Workload Scheduler for z/OS to edit the script (the JCL) for jobs where the script (the pointer) is defined by a JOBREC statement in the EQQSCLIB data set.

Centralized script
The script for a job defined with the centralized script option set to Y must be defined in the Tivoli Workload Scheduler for z/OS JOBLIB. The script is defined the same way as normal JCL. It is possible (but not necessary) to define some parameters of the centralized script, such as the user, in a job definition member of the SCRPTLIB data set.

With centralized scripts, you can perform variable substitution, automatic recovery, JCL editing, and job setup (as for "normal" z/OS jobs defined in the Tivoli Workload Scheduler for z/OS JOBLIB). It is also possible to use the job-submit exit (EQQUX001).

Note that jobs with centralized scripts will be defined in the Symphony file with a dependency named script. This dependency is released when the job is ready to run and the script has been downloaded from the Tivoli Workload Scheduler for z/OS controller to the fault-tolerant agent.

To download a centralized script, the DD statement EQQTWSCS must be present in the controller and server started tasks. During the download, the <twshome>/centralized directory is created on the fault-tolerant workstation. The script is downloaded to this directory. If an error occurs during this operation, the controller retries the download every 30 seconds, for a maximum of 10 times. If the script download still fails after 10 retries, the job (operation) is marked as Ended-in-error with error code OSUF.

Here are the detailed steps for downloading and executing centralized scripts on FTAs (Figure 2-17 on page 75):
1. The Tivoli Workload Scheduler for z/OS controller instructs the sender subtask to begin the script download.
2. The sender subtask writes the centralized script to the centralized scripts data set (EQQTWSCS).
3. The sender subtask writes a script download event (type JCL, action D) to the output queue (EQQTWSOU).
4. The output translator thread reads the JCL-D event from the output queue.
5. The output translator thread reads the script from the centralized scripts data set (EQQTWSCS).
6. The output translator thread spawns a script downloader thread.
7. The script downloader thread connects directly to netman on the FTA where the script will run.
8. netman spawns dwnldr and connects the socket from the script downloader thread to the new dwnldr process.
9. dwnldr downloads the script from the script downloader thread and writes it to the TWSHome/centralized directory on the FTA.
10. dwnldr notifies the script downloader thread of the result of the download.
11. The script downloader thread passes the result to the input writer thread.
12. If the script download was successful, the input writer thread writes a script download successful event (type JCL, action C) on the input queue (EQQTWSIN). If the script download was unsuccessful, the input writer thread writes a script download in error event (type JCL, action E) on the input queue.
13. The receiver subtask reads the script download result event from the input queue.
14. The receiver subtask notifies the Tivoli Workload Scheduler for z/OS controller of the result of the script download. If the script download was successful, the OPC controller then sends a release dependency event (type JCL, action R) to the FTA via the normal IPC channel (sender subtask → output queue → output translator → Mailbox.msg → mailman → writer on FTA, and so on). This event causes the job to run.
Figure 2-17 Steps and processes for downloading centralized script

Creating centralized scripts in the Tivoli Workload Scheduler for z/OS JOBLIB data set is described in 4.5.2, "Definition of centralized scripts" on page 219.

2.3.4 Tivoli Workload Scheduler for z/OS end-to-end plans

When scheduling jobs in the Tivoli Workload Scheduler environment, current plan processing also includes the automatic generation of the Symphony file that goes to the IBM Tivoli Workload Scheduler for z/OS server and the IBM Tivoli Workload Scheduler subordinate domain managers as well as fault-tolerant agents. The Tivoli Workload Scheduler for z/OS current plan program is normally run on workdays in the engine, as described in 2.1.3, "Tivoli Workload Scheduler for z/OS plans" on page 37.
Figure 2-18 shows a combined view of long-term planning and current planning. Changes to the databases require an update of the long-term plan, so most sites run the LTP Modify batch job immediately before extending the current plan.

Figure 2-18 Combined view of long-term planning and current planning

If the end-to-end feature is activated in Tivoli Workload Scheduler for z/OS, the current plan program reads the topology definitions described in the TOPOLOGY, DOMREC, CPUREC, and USRREC initialization statements (see 2.3.3, "Tivoli Workload Scheduler for z/OS end-to-end configuration" on page 68) and the script library (EQQSCLIB) as part of the planning process. Information from the initialization statements and the script library is used to create a Symphony file for the Tivoli Workload Scheduler FTAs (see Figure 2-19 on page 77). The whole process is handled by the Tivoli Workload Scheduler for z/OS planning programs.
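To give an idea of what these statements look like, the following sketch shows a TOPOLOGY statement together with one DOMREC and one USRREC; the directories, host name, member names, workstation names, and user values are assumptions made up for this example. In practice, the DOMREC and CPUREC statements are placed in the member named by TPLGYMEM, and the USRREC statements in the member named by USRMEM:

TOPOLOGY TPLGYMEM(TPLGINFO)
         USRMEM(USRINFO)
         BINDIR('/usr/lpp/TWS/V8R2M0')
         WRKDIR('/var/TWS/inst')
         HOSTNAME(twsce2e.example.com)
         PORTNUMBER(31111)
         LOGLINES(100)
         TRCDAYS(30)

DOMREC   DOMAIN(DOMAINZ)
         DOMMGR(FDMZ)
         DOMPARENT(MASTERDM)

USRREC   USRCPU(F103)
         USRNAM(tws)
         USRPSW('secret')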
Figure 2-19 Creation of the Symphony file in Tivoli Workload Scheduler for z/OS plan programs (the plan programs extract the Tivoli Workload Scheduler plan from the current plan, add the topology information (domains and workstations), and add the task definitions (path and user) for distributed Tivoli Workload Scheduler jobs)

The process is handled by the Tivoli Workload Scheduler for z/OS planning programs and is described in the next section.

Detailed description of the Symphony creation
Figure 2-15 on page 63 gives a description of the tasks and processes involved in the Symphony creation.
Figure 2-20 IBM Tivoli Workload Scheduler for z/OS 8.2 interprocess communication

1. The process is handled by the Tivoli Workload Scheduler for z/OS planning batch programs. The batch produces the NCP and initializes the SymUSER file.
2. The Normal Mode Manager (NMM) sends the SYNC START ('S') event to the server, and the end-to-end receiver starts leaving all events unprocessed in the inbound queue (TWSIN).
3. When the SYNC START ('S') event is processed by the output translator, it stops the OPCMASTER, sends the SYNC END ('E') event to the controller, and stops the entire network.
4. The NMM applies the job tracking events received while the new plan was produced. It then copies the new current plan data set (NCP) to the Tivoli Workload Scheduler for z/OS current plan data set (CP1 or CP2), makes a current plan backup (copying the active CP1/CP2 to the inactive CP1/CP2), and creates the Symphony Current Plan (SCP) data set as a copy of the active current plan (CP1 or CP2) data set.
5. Tivoli Workload Scheduler for z/OS mainframe scheduling is resumed.
6. The end-to-end receiver begins to process the events in the queue.
7. The SYNC CPREADY ('Y') event is sent to the output translator, which starts, leaving the events in the outbound queue (TWSOU).
8. The plan program produces the SymUSER file, starting from SCP, and then renames it Symnew.
9. When the Symnew file has been created, the plan program ends and the NMM notifies the output translator that the Symnew file is ready, sending the SYNC SYMREADY ('R') event to the output translator.
10. The output translator renames the old Symphony and Sinfonia files to Symold and Sinfold, and a Symphony OK ('X') or NOT OK ('B') SYNC event is sent to the Tivoli Workload Scheduler for z/OS engine, which logs a message in the engine message log indicating whether the Symphony has been switched.
11. The Tivoli Workload Scheduler for z/OS server master is started in USS and the input translator starts to process new events. As in Tivoli Workload Scheduler distributed, mailman and batchman process the events left in the local event files and start distributing the new Symphony file to the whole IBM Tivoli Workload Scheduler network.

When the Symphony file is created by the Tivoli Workload Scheduler for z/OS plan programs, it (or, more precisely, the Sinfonia file) is distributed to the subordinate Tivoli Workload Scheduler domain managers at the first level, which in turn distribute the Symphony (Sinfonia) file to their subordinate domain managers and fault-tolerant agents. (See Figure 2-21 on page 80.)
Figure 2-21 Symphony file distribution from ITWS for z/OS server to ITWS agents (the TWS plan is extracted from the TWS for z/OS plan and then distributed to the subordinate DMs and FTAs)

The Symphony file is generated:
- Every time the Tivoli Workload Scheduler for z/OS plan is extended or replanned
- When a Symphony renew batch job is submitted (from the Tivoli Workload Scheduler for z/OS legacy ISPF panels, option 3.5)

The Symphony file contains:
- Jobs to be executed on Tivoli Workload Scheduler FTAs
- z/OS (mainframe) jobs that are predecessors to Tivoli Workload Scheduler distributed jobs
- Job streams that have at least one job in the Symphony file
- Topology information for the distributed network with all the workstation and domain definitions, including the master domain manager of the distributed network; that is, the Tivoli Workload Scheduler for z/OS host.
After the Symphony file is created and distributed to the Tivoli Workload Scheduler FTAs, the Symphony file is updated by events:
- When job status changes
- When jobs or job streams are modified
- When jobs or job streams for the Tivoli Workload Scheduler FTAs are added to the plan in the Tivoli Workload Scheduler for z/OS controller

If you look at the Symphony file locally on a Tivoli Workload Scheduler FTA, from the Job Scheduling Console, or using the Tivoli Workload Scheduler command line interface to the plan (conman), you will see that:
- The Tivoli Workload Scheduler workstation has the same name as the related workstation defined in Tivoli Workload Scheduler for z/OS for the agent.
- OPCMASTER is the hard-coded name of the master domain manager workstation, which represents the Tivoli Workload Scheduler for z/OS controller.
- The name of the job stream (or schedule) is the hexadecimal representation of the occurrence (job stream instance) token, an internal, unique, and invariant identifier for occurrences. The job streams are always defined on the OPCMASTER workstation. (Because the job streams carry no dependencies, this does not reduce fault tolerance.) See Figure 2-22 on page 82.

  Using this hexadecimal representation for the job stream instances makes it possible to have several instances of the same job stream, because they have unique job stream names. Therefore, it is possible to have a plan in the Tivoli Workload Scheduler for z/OS controller and a distributed Symphony file that span more than 24 hours.

  Note: In Tivoli Workload Scheduler for z/OS, the key in the plan for an occurrence is the job stream name and input arrival time. In the Symphony file, the key is the job stream instance name. Because Tivoli Workload Scheduler for z/OS can have several job stream instances with the same name in the plan, a unique and invariant identifier (the occurrence token) is needed for the occurrence (job stream instance) name in the Symphony file.

- The job name is made up according to one of the following formats (see Figure 2-22 on page 82 for an example):
  – <T>_<opnum>_<applname> when the job is created in the Symphony file
  – <T>_<opnum>_<ext>_<applname> when the job is first deleted from the current plan and then recreated in the current plan
In these examples:
– <T> is J for normal jobs (operations), P for jobs representing pending predecessors, or R for recovery jobs (jobs added by Tivoli Workload Scheduler recovery).
– <opnum> is the operation number for the job in the job stream (in the current plan).
– <ext> is a sequential number that is incremented every time the same operation is deleted and then recreated in the current plan; if it is 0, it is omitted.
– <applname> is the name of the occurrence (job stream) that the operation belongs to.

Figure 2-22 Job name and job stream name as generated in the Symphony file

Tivoli Workload Scheduler for z/OS uses the job name and an operation number as the "key" for a job in a job stream. In the Symphony file, only the job name is used as the key. Because Tivoli Workload Scheduler for z/OS can have the same job name several times in one job stream and distinguishes between identical job names with the operation number, the job names generated in the Symphony file contain the Tivoli Workload Scheduler for z/OS operation number as part of the job name.

The name of a job stream (application) can contain national characters such as dollar ($), section (§), and pound (£). These characters are converted into dashes (-) in the names of the included jobs when the job stream is added to the Symphony file or when the Symphony file is created. For example, consider the job stream name:

APPL$$234§§ABC£

In the Symphony file, the names of the jobs in this job stream will be:

<T>_<opnum>_APPL--234--ABC-

This nomenclature is still valid because the job stream instance (occurrence) is identified by the occurrence token, and the operations are each identified by the
operation numbers (<opnum>) that are part of the job names in the Symphony file.

Note: The criteria used to generate job names in the Symphony file can be managed with the Tivoli Workload Scheduler for z/OS JTOPTS TWSJOBNAME() parameter, which was introduced with APAR PQ77970. It is possible, for example, to use the job name (from the operation) instead of the job stream name for the job name in the Symphony file, so that the job name will be <T>_<opnum>_<jobname> in the Symphony file.

In normal situations, the Symphony file is automatically generated as part of the Tivoli Workload Scheduler for z/OS plan process. The topology definitions are read and built into the Symphony file by the Tivoli Workload Scheduler for z/OS plan programs, so situations can occur in regular operation where you need to renew (or rebuild) the Symphony file from the Tivoli Workload Scheduler for z/OS plan:
- When you make changes to the script library or to the definitions of the TOPOLOGY statement
- When you add or change information in the plan, such as workstation definitions

To have the Symphony file rebuilt or renewed, you can use the Symphony Renew option of the Daily Planning menu (option 3.5 in the legacy IBM Tivoli Workload Scheduler for z/OS ISPF panels). This renew function can also be used to recover from error situations such as:
- A non-valid job definition in the script library
- Incorrect workstation definitions
- An incorrect Windows user name or password
- Changes to the script library or to the definitions of the TOPOLOGY statement

In 5.8.5, "Common errors for jobs on fault-tolerant workstations" on page 334, we describe how to correct several of these error situations without redistributing the Symphony file. It is worth getting familiar with these alternatives before you start redistributing a Symphony file in a heavily loaded production environment.
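As a worked illustration of the naming rules in this section (the job stream name PAYDAILY, job name PAYJOB1, operation number 020, occurrence token, and the TWSJOBNAME values shown are all assumptions for this example): an occurrence of job stream PAYDAILY with occurrence token 0AB5C7D3E2F10845 appears in the Symphony file as a job stream named 0AB5C7D3E2F10845 on workstation OPCMASTER, and its operation 020 appears as a job named according to the JTOPTS setting:

JTOPTS TWSJOBNAME(OCCNAME)   →   J_020_PAYDAILY   (job named after the job stream)
JTOPTS TWSJOBNAME(JOBNAME)   →   J_020_PAYJOB1    (job named after the operation's job name)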
2.3.5 Making the end-to-end scheduling system fault tolerant

In the following, we cover some possible cases of failure in end-to-end scheduling and ways to mitigate these failures:
1. The Tivoli Workload Scheduler for z/OS engine (controller) can fail due to a system or task outage.
2. The Tivoli Workload Scheduler for z/OS server can fail due to a system or task outage.
3. The domain managers at the first level, that is, the domain managers directly connected to the Tivoli Workload Scheduler for z/OS server, can fail due to a system or task outage.

To avoid an outage of the end-to-end workload managed in the Tivoli Workload Scheduler for z/OS engine and server and in the Tivoli Workload Scheduler domain managers, you should consider:
- Using a standby engine (controller) for the Tivoli Workload Scheduler for z/OS engine (controller).
- Making sure that your Tivoli Workload Scheduler for z/OS server can be reached if the Tivoli Workload Scheduler for z/OS engine (controller) is moved to one of its standby engines (the TCP/IP configuration in your enterprise). Remember that the end-to-end server started task must always be active on the same z/OS system as the active engine (controller).
- Defining backup domain managers for your Tivoli Workload Scheduler domain managers at the first level.

Note: It is good practice to define backup domain managers for all domain managers in the distributed Tivoli Workload Scheduler network.

Figure 2-23 shows an example of a fault-tolerant end-to-end network with a Tivoli Workload Scheduler for z/OS standby controller engine and one Tivoli Workload Scheduler backup domain manager for one Tivoli Workload Scheduler domain manager at the first level.
Figure 2-23 Redundant configuration with standby engine and IBM Tivoli Workload Scheduler backup DM

If the domain manager for DomainZ fails, it is possible to switch to the backup domain manager. The backup domain manager has an updated Symphony file and knows the subordinate domain managers and fault-tolerant agents, so it can take over the responsibilities of the domain manager. This switch can be performed without any outage in workload management.

If the switch to the backup domain manager is going to remain active across the Tivoli Workload Scheduler for z/OS plan extension, you must change the topology definitions in the Tivoli Workload Scheduler for z/OS DOMREC initialization statements. The backup domain manager fault tolerant workstation is going to be the domain manager at the first level for the Tivoli Workload Scheduler distributed network, even after the plan extension.

Example 2-1 shows how to change the name of the fault tolerant workstation in the DOMREC initialization statement, if the switch to the backup domain manager is effective across the Tivoli Workload Scheduler for z/OS plan extension. (See 5.5.4, "Switch to Tivoli Workload Scheduler backup domain manager" on page 308 for more information.)
Example 2-1 DOMREC initialization statement

DOMREC DOMAIN(DOMAINZ) DOMMGR(FDMZ) DOMPARENT(MASTERDM)

should be changed to:

DOMREC DOMAIN(DOMAINZ) DOMMGR(FDMB) DOMPARENT(MASTERDM)

where FDMB is the name of the fault tolerant workstation where the backup domain manager is running.

If the Tivoli Workload Scheduler for z/OS engine or server fails, it is possible to let one of the standby engines in the same sysplex take over. This takeover can be accomplished without any outage in workload management. The Tivoli Workload Scheduler for z/OS server must follow the Tivoli Workload Scheduler for z/OS engine. That is, if the Tivoli Workload Scheduler for z/OS engine is moved to another system in the sysplex, the Tivoli Workload Scheduler for z/OS server must be moved to the same system in the sysplex.

Note: The synchronization between the Symphony file on the Tivoli Workload Scheduler domain manager and the Symphony file on its backup domain manager has improved considerably with FixPack 04 for IBM Tivoli Workload Scheduler, which introduces an enhanced and improved fault-tolerant switch manager function.

2.3.6 Benefits of end-to-end scheduling

The benefits that can be gained from using Tivoli Workload Scheduler for z/OS end-to-end scheduling include:
- The ability to connect Tivoli Workload Scheduler fault-tolerant agents to a Tivoli Workload Scheduler for z/OS controller.
- Scheduling on additional operating systems.
- The ability to define resource dependencies between jobs that run on different FTAs or in different domains.
- Synchronizing work in mainframe and distributed environments.
- The ability to organize the scheduling network into multiple tiers, delegating some responsibilities to Tivoli Workload Scheduler domain managers.
- Extended planning capabilities, such as the use of long-term plans, trial plans, and extended plans, also for the Tivoli Workload Scheduler network. Extended plans also mean that the current plan can span more than 24 hours. One possible benefit is being able to extend a current plan over a time
  period when no one will be available to verify that the current plan was successfully created each day, such as over a holiday weekend. The end-to-end environment also allows the current plan to be extended for a specified length of time, or to be replanned to remove completed jobs.
- Powerful run-cycle and calendar functions. Tivoli Workload Scheduler end-to-end enables more complex run cycles and rules to be defined to determine when a job stream should be scheduled.
- The ability to create a trial plan that can span more than 24 hours.
- Improved use of resources (keep the resource if the job ends in error).
- Enhanced use of host names instead of dotted IP addresses.
- Multiple job or job stream instances in the same plan. In the end-to-end environment, job streams are renamed using a unique identifier so that multiple job stream instances can be included in the current plan.
- The ability to use batch tools (for example, Batchloader, Massupdate, OCL, and BCIT) that enable batched changes to be made to the Tivoli Workload Scheduler end-to-end database and plan.
- The ability to specify at the job level whether the job's script should be centralized (placed in the Tivoli Workload Scheduler for z/OS JOBLIB) or non-centralized (placed locally on the Tivoli Workload Scheduler agent).
- Use of Tivoli Workload Scheduler for z/OS JCL variables in both centralized and non-centralized scripts.
- The ability to use Tivoli Workload Scheduler for z/OS recovery in centralized scripts or Tivoli Workload Scheduler recovery in non-centralized scripts.
- The ability to define and browse operator instructions associated with jobs in the database and plan. In a Tivoli Workload Scheduler distributed environment, it is possible to insert comments or a description in a job definition, but these comments and descriptions are not visible from the plan functions.
- The ability to define a job stream that will be submitted automatically to Tivoli Workload Scheduler when one of the following events occurs in the z/OS system: a particular job is executed or terminated in the z/OS system, a specified resource becomes available, or a z/OS data set is created or opened.
Considerations
Implementing Tivoli Workload Scheduler for z/OS end-to-end scheduling also imposes some limitations:
- Windows users' passwords are defined directly (without any encryption) in the Tivoli Workload Scheduler for z/OS server initialization parameters. It is possible to place these definitions in a separate library with access restricted (by RACF, for example) to authorized persons.
- In an end-to-end configuration, some of the conman command options are disabled. On an end-to-end FTA, the conman command only allows display operations and the subset of commands (such as kill, altpass, link/unlink, start/stop, and switchmgr) that do not affect the status or sequence of jobs. Command options that could affect the information contained in the Symphony file are not allowed. For a complete list of allowed conman commands, refer to 2.7, "conman commands in the end-to-end environment" on page 106.
- Workstation classes are not supported in an end-to-end configuration.
- The LIMIT attribute is supported at the workstation level, not at the job stream level, in an end-to-end environment.
- Some Tivoli Workload Scheduler functions are not available directly on Tivoli Workload Scheduler FTAs, but can be handled by other functions in Tivoli Workload Scheduler for z/OS. For example:
  – IBM Tivoli Workload Scheduler prompts
    • Recovery prompts are supported.
    • The Tivoli Workload Scheduler predefined and ad hoc prompts can be replaced with the manual workstation function in Tivoli Workload Scheduler for z/OS.
  – IBM Tivoli Workload Scheduler file dependencies
    • It is not possible to define file dependencies directly at the job level in Tivoli Workload Scheduler for z/OS for distributed Tivoli Workload Scheduler jobs.
    • The filewatch program that is delivered with Tivoli Workload Scheduler can be used to create file dependencies for distributed jobs in Tivoli Workload Scheduler for z/OS. Using the filewatch program, the file dependency is "replaced" by a job dependency in which a predecessor job checks for the file using the filewatch program.
  – Dependencies at the job stream level
    The traditional way to handle these types of dependencies in Tivoli Workload Scheduler for z/OS is to define a "dummy start" and a "dummy end" job at the beginning and end of the job streams, respectively.
  – Repeat range (that is, "rerun this job every 10 minutes")
    Although there is no built-in function for this in Tivoli Workload Scheduler for z/OS, it can be accomplished in different ways, such as by defining the job repeatedly in the job stream with specific start times or by using a PIF (Tivoli Workload Scheduler for z/OS Programming Interface) program to rerun the job every 10 minutes.
  – Job priority change
    Job priority cannot be changed directly for an individual fault-tolerant job. In an end-to-end configuration, it is possible to change the priority of a job stream. When the priority of a job stream is changed, all jobs within the job stream will have the same priority.
  – Internetwork dependencies
    An end-to-end configuration supports dependencies only on jobs that run in the same Tivoli Workload Scheduler end-to-end or distributed topology (network).

2.4 Job Scheduling Console and related components

The Job Scheduling Console (JSC) provides another way of working with the Tivoli Workload Scheduler for z/OS databases and current plan. The JSC is a graphical user interface that connects to the Tivoli Workload Scheduler for z/OS engine via a Tivoli Workload Scheduler for z/OS TCP/IP server task. Usually this task is dedicated exclusively to handling JSC communications. Later in this book, the server task that is dedicated to JSC communications is referred to as the JSC server (Figure 2-24 on page 90).

The TCP/IP server is a separate address space, started and stopped either automatically by the engine or by the user via the z/OS start and stop commands. More than one TCP/IP server can be associated with an engine.
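Like the end-to-end server, a JSC server is defined by a SERVOPTS initialization statement. A minimal sketch might look as follows; the subsystem name, host name, port, code page, and user-map member are assumptions for illustration only:

SERVOPTS SUBSYS(TWSC)
         PROTOCOL(JSC)
         JSCHOSTNAME(twscjsc.example.com)
         PORTNUMBER(425)
         CODEPAGE(IBM-037)
         USERMAP(USERS)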
Figure 2-24 Communication between JSC and ITWS for z/OS via the JSC Server

The Job Scheduling Console can be run on almost any platform. Using the JSC, an operator can access both Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS scheduling engines. In order to communicate with the scheduling engines, the JSC requires several additional components to be installed:
- Tivoli Management Framework
- Job Scheduling Services (JSS)
- Tivoli Workload Scheduler connector, Tivoli Workload Scheduler for z/OS connector, or both

The Job Scheduling Services and the connectors must be installed on top of the Tivoli Management Framework. Together, the Tivoli Management Framework, the Job Scheduling Services, and the connector provide the interface between the JSC and the scheduling engine. The Job Scheduling Console is installed locally on your desktop computer, laptop computer, or workstation.

2.4.1 A brief introduction to the Tivoli Management Framework

Tivoli Management Framework provides the foundation on which the Job Scheduling Services and connectors are installed. It also performs access verification when a Job Scheduling Console user logs in. The Tivoli Management Environment (TME®) uses the concept of Tivoli Management Regions (TMRs). There is a single server for each TMR, called the TMR server; this is analogous
to the IBM Tivoli Workload Scheduler master server. The TMR server contains the Tivoli object repository (a database used by the TMR). Managed nodes are semi-independent agents that are installed on other nodes in the network; these are roughly analogous to Tivoli Workload Scheduler fault-tolerant agents. For more information about the Tivoli Management Framework, see the IBM Tivoli Management Framework 4.1 User's Guide, GC32-0805.

2.4.2 Job Scheduling Services (JSS)

The Job Scheduling Services component provides a unified interface in the Tivoli Management Framework for different job scheduling engines. Job Scheduling Services does not do anything on its own; it requires additional components called connectors in order to connect to job scheduling engines. It must be installed on either the TMR server or a managed node.

2.4.3 Connectors

Connectors are the components that enable the Job Scheduling Services to talk with different types of scheduling engines. When working with a particular type of scheduling engine, the Job Scheduling Console communicates with the scheduling engine via the Job Scheduling Services and the connector. A different connector is required for each type of scheduling engine. A connector can only be installed on a computer where the Tivoli Management Framework and Job Scheduling Services have already been installed.

There are two types of connectors for connecting to the two types of scheduling engines in the IBM Tivoli Workload Scheduler 8.2 suite:
- IBM Tivoli Workload Scheduler for z/OS connector (or OPC connector)
- IBM Tivoli Workload Scheduler connector

Job Scheduling Services communicates with the engine via the connector of the appropriate type. When working with a Tivoli Workload Scheduler for z/OS engine, the JSC communicates via the Tivoli Workload Scheduler for z/OS connector. When working with a Tivoli Workload Scheduler engine, the JSC communicates via the Tivoli Workload Scheduler connector. The two types of connectors function somewhat differently:
- The Tivoli Workload Scheduler for z/OS connector communicates over TCP/IP with the Tivoli Workload Scheduler for z/OS engine running on a mainframe (MVS or z/OS) computer.
- The Tivoli Workload Scheduler connector performs direct reads and writes of the Tivoli Workload Scheduler plan and database files on the same computer where the Tivoli Workload Scheduler connector runs.
A connector instance must be created before the connector can be used. Each type of connector can have multiple instances. A separate instance is required for each engine that will be controlled by the JSC. We will now discuss each type of connector in more detail.

Tivoli Workload Scheduler for z/OS connector
Also sometimes called the OPC connector, the Tivoli Workload Scheduler for z/OS connector can be instantiated on any TMR server or managed node. The Tivoli Workload Scheduler for z/OS connector instance communicates via TCP with the Tivoli Workload Scheduler for z/OS TCP/IP server. You might, for example, have two different Tivoli Workload Scheduler for z/OS engines that both must be accessible from the Job Scheduling Console. In this case, you would install one connector instance for working with one Tivoli Workload Scheduler for z/OS engine, and another connector instance for communicating with the other engine.

When a Tivoli Workload Scheduler for z/OS connector instance is created, the IP address (or host name) and TCP port number of the Tivoli Workload Scheduler for z/OS engine's TCP/IP server are specified. The Tivoli Workload Scheduler for z/OS connector uses these two pieces of information to connect to the Tivoli Workload Scheduler for z/OS engine. See Figure 2-25 on page 93.

Tivoli Workload Scheduler connector
The Tivoli Workload Scheduler connector must be instantiated on the host where the Tivoli Workload Scheduler engine is installed so that it can access the plan and database files locally. This means that the Tivoli Management Framework must be installed (either as a TMR server or managed node) on the server where the Tivoli Workload Scheduler engine resides. Usually, this server is the Tivoli Workload Scheduler master domain manager. But it may also be desirable to connect with JSC to another domain manager or to a fault-tolerant agent. If multiple instances of Tivoli Workload Scheduler are installed on a server, it is possible to have one Tivoli Workload Scheduler connector instance for each Tivoli Workload Scheduler instance on the server.

When a Tivoli Workload Scheduler connector instance is created, the full path to the Tivoli Workload Scheduler home directory associated with that Tivoli Workload Scheduler instance is specified. This is how the Tivoli Workload Scheduler connector knows where to find the Tivoli Workload Scheduler databases and plan. See Figure 2-25 on page 93.

Connector instances
We now give some examples of how connector instances might be installed in the real world.
One connector instance of each type
In Figure 2-25, there are two connector instances: one Tivoli Workload Scheduler for z/OS connector instance and one Tivoli Workload Scheduler connector instance.
- The Tivoli Workload Scheduler for z/OS connector instance is associated with a Tivoli Workload Scheduler for z/OS engine running in a remote sysplex. Communication between the connector instance and the remote scheduling engine is conducted over a TCP connection.
- The Tivoli Workload Scheduler connector instance is associated with a Tivoli Workload Scheduler engine installed on the same AIX server. The Tivoli Workload Scheduler connector instance reads from and writes to the plan (the Symphony file) of the Tivoli Workload Scheduler engine.

Figure 2-25 One ITWS for z/OS connector and one ITWS connector instance
Tip: Tivoli Workload Scheduler connector instances must be created on the server where the Tivoli Workload Scheduler engine is installed. This is because the connector must have local access to the Tivoli Workload Scheduler engine (specifically, to the plan and database files). This limitation obviously does not apply to Tivoli Workload Scheduler for z/OS connector instances, because the Tivoli Workload Scheduler for z/OS connector communicates with the remote Tivoli Workload Scheduler for z/OS engine over TCP/IP.

In this example, the connectors are installed on the domain manager DMB. This domain manager has one connector instance of each type:
- A Tivoli Workload Scheduler connector to monitor the plan file (Symphony) locally on DMB
- A Tivoli Workload Scheduler for z/OS (OPC) connector to work with the databases and current plan on the mainframe

Having the Tivoli Workload Scheduler connector installed on a DM provides the operator with the ability to use the JSC to look directly at the Symphony file on that workstation. This is particularly useful in the event that problems arise during the production day. If any discrepancy appears between the state of a job in the Tivoli Workload Scheduler for z/OS current plan and the Symphony file on an FTA, it is useful to be able to look at the Symphony file directly. Another benefit is that retrieval of job logs from an FTA is much faster when the job log is retrieved through the Tivoli Workload Scheduler connector. If the job log is fetched through the Tivoli Workload Scheduler for z/OS engine, it can take much longer.

Connectors on multiple domain managers
With the previous version of IBM Tivoli Workload Scheduler, Version 8.1, it was necessary to have a single primary domain manager that was the parent of all other domain managers. Figure 2-25 on page 93 shows an example of such an arrangement. Tivoli Workload Scheduler 8.2 removes this limitation. With Version 8.2, it is possible to have more than one domain manager directly under the master domain manager. Most end-to-end scheduling networks will have more than one domain manager under the master. For this reason, it is a good idea to install the Tivoli Workload Scheduler connector and the OPC connector on more than one domain manager.
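Connector instances are typically created with the command-line utilities delivered with the connectors, wtwsconn.sh (for the Tivoli Workload Scheduler connector) and wopcconn (for the Tivoli Workload Scheduler for z/OS connector), run on the TMR server or managed node. The sketch below shows the general idea only; the option names and values are assumptions from memory and should be checked against the connector documentation:

# Tivoli Workload Scheduler connector instance for the local engine on DMB
wtwsconn.sh -create -n TWS-DMB -t /tivoli/tws

# Tivoli Workload Scheduler for z/OS (OPC) connector instance pointing
# at the JSC server of the remote controller
wopcconn -create -e TWSC -a twscjsc.example.com -p 425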
Figure 2-26 An example with two connector instances of each type

Note: It is a good idea to set up more than one Tivoli Workload Scheduler for z/OS connector instance associated with the engine (as in Figure 2-26). This way, if there is a problem with one of the workstations running the connector, JSC users will still be able to access the Tivoli Workload Scheduler for z/OS engine via the other connector. If JSC access is important to your enterprise, it is vital to set up redundant connector instances like this.

Next, we discuss the connectors in more detail.

The connector programs
These are the programs that run behind the scenes to make the connectors work. Each program and its function is described below.

Programs of the IBM Tivoli Workload Scheduler for z/OS connector
The programs that comprise the Tivoli Workload Scheduler for z/OS connector are located in $BINDIR/OPC (Figure 2-27 on page 96).
Figure 2-27 Programs of the IBM Tivoli Workload Scheduler for z/OS (OPC) connector

opc_connector
The main connector program, containing the implementation of the main connector methods (essentially all the methods that are required to connect to and retrieve data from the Tivoli Workload Scheduler for z/OS engine). It is implemented as a threaded daemon: it is started automatically by the Tivoli Framework at the first request it should handle, and it stays active until no request has arrived for a long time. After it is started, it starts new threads for all JSC requests that require data from a specific Tivoli Workload Scheduler for z/OS engine.

opc_connector2
A small connector program that contains the implementation of small methods that do not require data from Tivoli Workload Scheduler for z/OS. This program is implemented per method: the Tivoli Framework starts the program when a method implemented by it is called, the process performs the action for that method, and the process then terminates. This is useful for methods that can be isolated and for which it is not logical to keep a process active, such as the ones called by the JSC when it starts and asks for information from all of the connectors.
Programs of the IBM Tivoli Workload Scheduler connector
The programs that comprise the Tivoli Workload Scheduler connector are located in $BINDIR/Maestro (Figure 2-28).

Figure 2-28 Programs of the IBM Tivoli Workload Scheduler connector

maestro_engine
The maestro_engine program performs authentication when a user logs in via the Job Scheduling Console. It also starts and stops the Tivoli Workload Scheduler engine. It is started by the Tivoli Management Framework (specifically, the oserv program) when a user logs in from the JSC, and it terminates after 30 minutes of inactivity.

Note: oserv is the Tivoli service that is used as the object request broker (ORB). This service runs on the Tivoli management region server and on each managed node.

maestro_plan
The maestro_plan program reads from and writes to the Tivoli Workload Scheduler plan. It also handles switching to a different plan. The program is started when a user accesses the plan, and it terminates after 30 minutes of inactivity.
maestro_database
The maestro_database program reads from and writes to the Tivoli Workload Scheduler database files. It is started when a JSC user accesses a database object or creates a new object definition, and it terminates after 30 minutes of inactivity.

job_instance_output
The job_instance_output program retrieves job standard list files. It is started when a JSC user runs the Browse Job Log operation. It starts up, retrieves the requested stdlist file, and then terminates.

maestro_x_server
The maestro_x_server program provides an interface to certain types of extended agents, such as the SAP R/3 extended agent (r3batch). It starts up when a command is run in the JSC that requires execution of an agent method. It runs the X-agent method, returns the output, and then terminates. It runs only on workstations that host an r3batch extended agent.

2.5 Job log retrieval in an end-to-end environment
In this section, we cover the detailed steps of job log retrieval in an end-to-end environment using the JSC. The steps differ depending on which connector you use to retrieve the job log and whether firewalls are involved. We cover all of these scenarios: using the Tivoli Workload Scheduler (distributed) connector (via the domain manager or first-level domain manager), using the Tivoli Workload Scheduler for z/OS (OPC) connector, and with firewalls in the picture.

2.5.1 Job log retrieval via the Tivoli Workload Scheduler connector
As shown in Figure 2-29 on page 99, these are the steps behind the scenes in an end-to-end scheduling network when retrieving the job log via the domain manager (using the Tivoli Workload Scheduler distributed connector):
1. The operator requests the job log in the Job Scheduling Console.
2. The JSC connects to oserv running on the domain manager.
3. oserv spawns job_instance_output to fetch the job log.
4. job_instance_output communicates over TCP directly with the workstation where the job log exists, bypassing the domain manager.
5. netman on that workstation spawns scribner and hands over the TCP connection with job_instance_output to the new scribner process.
6. scribner retrieves the job log.
7. scribner sends the job log to job_instance_output on the master.
8. job_instance_output relays the job log to oserv.
9. oserv sends the job log to the JSC.

Figure 2-29 Job log retrieval in an end-to-end scheduling network via the domain manager

2.5.2 Job log retrieval via the OPC connector
As shown in Figure 2-30 on page 101, the following steps take place behind the scenes in an end-to-end scheduling network when retrieving the job log using the OPC connector. The initial request for the job log proceeds as follows:
1. The operator requests the job log in the Job Scheduling Console.
2. The JSC connects to oserv running on the domain manager.
3. oserv tells the OPC connector program to request the job log from the OPC system.
4. opc_connector relays the request to the JSC Server task on the mainframe.
5. The JSC Server requests the job log from the controller.

The next step depends on whether the job log has already been retrieved. If the job log has already been retrieved, skip to step 17. If it has not been retrieved yet, continue with step 6.

Assuming that the log has not been retrieved already:
6. The controller sends the request for the job log to the sender subtask.
7. The controller sends a message to the operator indicating that the job log has been requested. This message is displayed in a dialog box in the JSC. (The message travels via this path: controller, JSC Server, opc_connector, oserv, JSC.)
8. The sender subtask sends the request to the output translator, via the output queue.
9. The output translator thread reads the request and spawns a job log retriever thread to handle it.
10. The job log retriever thread opens a TCP connection directly to the workstation where the job log exists, bypassing the domain manager.
11. netman on that workstation spawns scribner and hands over the TCP connection with the job log retriever to the new scribner process.
12. scribner retrieves the job log.
13. scribner sends the job log to the job log retriever thread.
14. The job log retriever thread passes the job log to the input writer thread.
15. The input writer thread sends the job log to the receiver subtask, via the input queue.
16. The receiver subtask sends the job log to the controller.

When the operator requests the job log a second time, the first five steps are the same as in the initial request (above). This time, because the job log has already been received by the controller:
17. The controller sends the job log to the JSC Server.
18. The JSC Server sends the information to the OPC connector program running on the domain manager.
19. The IBM Tivoli Workload Scheduler for z/OS connector relays the job log to oserv.
20. oserv relays the job log to the JSC, and the JSC displays the job log in a new window.
Figure 2-30 Job log retrieval in an end-to-end network via the ITWS for z/OS - no FIREWALL=Y configured (the figure also shows the JSC dialog message "EQQMA41I The engine has requested to the remote agent the joblog info needed to process the command. Please, retry later." and the controller message "EQQM637I A JOBLOG IS NEEDED TO PROCESS THE COMMAND. IT HAS BEEN REQUESTED.")

2.5.3 Job log retrieval when firewalls are involved
When firewalls are involved (that is, FIREWALL=Y is configured in the CPUREC definition of the workstation from which the job log is retrieved), the steps for retrieving the job log in an end-to-end scheduling network are different. These steps are shown in Figure 2-31 on page 102. Note that the firewall is configured to allow only the following traffic: DMY → DMA and DMZ → DMB.
1. The operator requests the job log in the JSC or in the mainframe ISPF panels.
2. A TCP connection is opened toward the workstation where the job log exists; as shown in Figure 2-31, this first connection goes to the first-level domain manager DMZ.
3. netman on that workstation spawns router and hands over the TCP socket to the new router process.
4. router opens a TCP connection to netman on the parent domain manager of the workstation where the job log exists (DMB), because this domain manager is also behind the firewall.
5. netman on the domain manager spawns router and hands over the TCP socket to the new router process.
6. router opens a TCP connection to netman on the workstation where the job log exists.
7. netman on that workstation spawns scribner and hands over the TCP socket with router to the new scribner process.
8. scribner retrieves the job log.
9. scribner on FTA4 sends the job log to router on DMB.
10. router sends the job log to the router program running on DMZ.
11. router on DMZ returns the job log to the requester on the master (shown as arrow 11 in Figure 2-31).

Figure 2-31 Job log retrieval in an end-to-end network via the ITWS for z/OS - with FIREWALL=Y configured

It is important to note that in the previous scenario, you should not configure the domain manager DMB as FIREWALL=N in its CPUREC definition.
If you do, you will not be able to retrieve the job log from FTA4, even though FTA4 is configured as FIREWALL=Y. This is shown in Figure 2-32. In this case, the TCP connection toward the workstation where the job log exists is blocked by the firewall, so the connection request is never received. The firewall does not allow direct connections from DMZ to FTA4; the only connections from DMZ that are permitted are those that go to DMB. Because DMB has FIREWALL=N, the connection was not routed through DMB: it tried to go straight from DMZ to FTA4.

Figure 2-32 Wrong configuration: connection blocked
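To make the correct configuration concrete, the following sketch shows how the two workstations in this scenario might be defined with CPUREC initialization statements. The statement and keyword names are those used by Tivoli Workload Scheduler for z/OS 8.2, but the node names, port, and other values are examples only, and other required CPUREC keywords (CPUOS, CPUTZ, CPUUSER, and so on) are omitted here for brevity.

   /* Sketch of CPUREC definitions for the FIREWALL=Y scenario        */
   /* (node names and port number are examples)                       */
   CPUREC CPUNAME(DMB)
          CPUNODE(dmb.example.com)
          CPUTCPIP(31182)
          CPUDOMAIN(DOMAINB)
          CPUTYPE(FTA)
          CPUFULLSTAT(ON)
          FIREWALL(Y)        /* DMB must NOT be set to FIREWALL(N)    */
   CPUREC CPUNAME(FTA4)
          CPUNODE(fta4.example.com)
          CPUTCPIP(31182)
          CPUDOMAIN(DOMAINB)
          CPUTYPE(FTA)
          FIREWALL(Y)        /* job log requests are routed via DMB   */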
2.6 Tivoli Workload Scheduler, important files, and directory structure
Figure 2-33 on page 104 shows the most important files in the Tivoli Workload Scheduler 8.2 working directory in USS (WRKDIR).

Figure 2-33 The most important files in the Tivoli Workload Scheduler 8.2 working directory in USS (the figure shows, among others, the options and configuration files localopts, TWSCCLog.properties, NetConf, and globalopts; the plans Symphony, Sinfonia, SymX, Symnew, Symold, and Symbad; the event queues Mailbox.msg, Intercom.msg, NetReq.msg, ServerN.msg, FTA.msg, and tomaster.msg; the pobox, mozart (mastsked, jobs), network, audit, and version directories; Translator.wjl and Translator.chk; and the stdlist/logs files YYYYMMDD_NETMAN.log, YYYYMMDD_TWSMERGE.log, and YYYYMMDD_E2EMERGE.log; a color legend marks the files that are found only on the end-to-end server in HFS on the mainframe and not on UNIX or Windows workstations)

The descriptions of the files are:
SymX (where X is the name of the user that ran the CP extend or Symphony renew job): A temporary file created during a CP extend or Symphony renew. This file is copied to Symnew, which is then copied to Sinfonia and Symphony.
Symbad (bad Symphony): Created only if a CP extend or Symphony renew results in an invalid Symphony file.
Symold (old Symphony): The Symphony file from before the most recent CP extend or Symphony renew.
Translator.wjl: Translator event log for requested job logs.
Translator.chk: Translator checkpoint file.
YYYYMMDD_E2EMERGE.log: Translator log.

Note: The Symnew, SymX, and Symbad files are temporary files and normally cannot be seen in the USS work directory.

Figure 2-34 shows the most important files in the Tivoli Workload Scheduler 8.2 binary directory in USS (BINDIR). The options files in the config subdirectory are only reference copies of these files; they are not active configuration files.

Figure 2-34 A list of the most important files in the Tivoli Workload Scheduler 8.2 binary directory in USS (the figure shows the bin directory with the scripts and programs batchman, config, configure, mailman, netman, starter, translator, and writer and the modules EQQBTCHM, EQQCNFG0, EQQCNFGR, EQQMLMN0, EQQNTMN0, EQQSTRTR, EQQTRNSL, and EQQWRTR0, plus the catalog, codeset, and zoneinfo directories and a config directory with reference copies of NetConf, globalopts, and localopts)

Figure 2-35 on page 106 shows the Tivoli Workload Scheduler 8.2 directory structure on the fault-tolerant agents. Note that the database files (such as jobs and calendars) are not used in the Tivoli Workload Scheduler 8.2 end-to-end scheduling environment.
Figure 2-35 Tivoli Workload Scheduler 8.2 directory structure on the fault-tolerant agents (the figure shows the tws home directory with, among others, the Security, localopts, and globalopts option files, the network and parameters files, the bin, mozart, schedlog, stdlist, audit, pobox, and version directories, and the database files cpudata, userdata, mastsked, jobs, calendars, prompts, and resources)

2.7 conman commands in the end-to-end environment
In Tivoli Workload Scheduler, you can use the conman command line interface to manage the distributed production. A subset of these commands can also be used in end-to-end scheduling. In general, command options that could affect the information contained in the Symphony file are not allowed. Disallowed conman command options include adding and removing dependencies, submitting and cancelling jobs, and so forth.

Figure 2-36 on page 107 and Figure 2-37 on page 107 list the conman commands that are available on end-to-end fault-tolerant workstations in a Tivoli Workload Scheduler 8.2 end-to-end scheduling network. Note that in the Type field, M stands for domain managers, F for fault-tolerant agents, and A for standard agents.

Note: The composer command line interface, which is used to manage database objects in a distributed Tivoli Workload Scheduler environment, is not used in end-to-end scheduling, because in end-to-end scheduling the databases are located on the Tivoli Workload Scheduler for z/OS master.
Figure 2-36 conman commands available in end-to-end environment
Figure 2-37 conman commands available in end-to-end environment
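As a brief illustration, the following conman session shows the kind of display and link-management commands that remain available on an end-to-end fault-tolerant workstation. The workstation and domain names are examples; Figure 2-36 and Figure 2-37 remain the authoritative list of which commands are allowed on which workstation types.

   conman "sc @!@"                 # showcpus: list workstations and their link status
   conman "ss @"                   # showschedules: list job streams in the local Symphony file
   conman "sj"                     # showjobs: list jobs on this workstation
   conman "link DOMAINB!DMB"       # re-open the link to domain manager DMB (example names)
   conman "switchmgr DOMAINB;FTA4" # promote FTA4 to manager of DOMAINB (example names)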
Chapter 3. Planning end-to-end scheduling with Tivoli Workload Scheduler 8.2

In this chapter, we provide details on how to plan for end-to-end scheduling with Tivoli Workload Scheduler for z/OS, Tivoli Workload Scheduler, and the Job Scheduling Console. The chapter covers two areas:
1. Before the installation is performed
Here we describe what to consider before performing the installation and how to order the product. This includes the following sections:
- "Different ways to do end-to-end scheduling" on page 111
- "The rationale behind end-to-end scheduling" on page 112
- "Before you start the installation" on page 113
2. Planning for end-to-end scheduling
Here we describe relevant planning issues that should be considered and handled before the actual installation and customization of Tivoli Workload
Scheduler for z/OS, Tivoli Workload Scheduler, and the Job Scheduling Console is performed. This includes the following sections:
- "Planning end-to-end scheduling with Tivoli Workload Scheduler for z/OS" on page 116
- "Planning for end-to-end scheduling with Tivoli Workload Scheduler" on page 139
- "Planning for the Job Scheduling Console" on page 149
- "Planning for migration or upgrade from previous versions" on page 155
- "Planning for maintenance or upgrades" on page 156
3.1 Different ways to do end-to-end scheduling
The ability to connect mainframe and distributed platforms into an integrated scheduling network is not new. Several years ago, IBM offered two methods:

By use of Tivoli OPC tracker agents
With tracker agents, Tivoli Workload Scheduler for z/OS can submit and monitor jobs on remote tracker agents. The tracker agent software supported only a limited set of operating systems. Tracker agents were also not fault-tolerant: if the network went down, tracker agents would not continue to run. Furthermore, tracker agents did not scale well, so it was simply not possible to get a stable environment for large distributed environments with several hundred tracker agents.

By use of Tivoli Workload Scheduler MVS extended agents
Using extended agents, Tivoli Workload Scheduler can submit and monitor mainframe jobs in (for example) OPC or JES. The extended agents had limited functionality and were not fault tolerant. This approach required a Tivoli Workload Scheduler master and was not ideal for large, established MVS workloads. Extended agents can, however, be a perfectly viable solution for a large Tivoli Workload Scheduler network that needs to run only a few jobs in a z/OS mainframe environment.

From Tivoli Workload Scheduler 8.1, it became possible to integrate Tivoli Workload Scheduler agents with Tivoli Workload Scheduler for z/OS, so that Tivoli Workload Scheduler for z/OS was the master doing scheduling and tracking for jobs in the mainframe environment as well as in the distributed environment. The end-to-end scheduling feature of Tivoli Workload Scheduler 8.1 was the first step toward a complete unified system. The end-to-end solution has been optimized in Tivoli Workload Scheduler 8.2, where the integration between the two products, Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS, is even tighter. Furthermore, some of the functions that were missing in the first Tivoli Workload Scheduler 8.1 solution have been added in the Version 8.2 end-to-end solution.
3.2 The rationale behind end-to-end scheduling
As described in Section 2.3.6, "Benefits of end-to-end scheduling" on page 86, you can gain several benefits by using Tivoli Workload Scheduler for z/OS end-to-end scheduling. To review:
- You can use fault-tolerant agents, so that distributed job scheduling is less dependent on network connection problems and poor network performance.
- You can schedule workload on additional operating systems, such as Linux and Windows 2000.
- You have seamless synchronization of work in mainframe and distributed environments.
- Making dependencies between mainframe jobs and jobs in distributed environments is straightforward, using the same terminology and known interfaces.
- Tivoli Workload Scheduler for z/OS can use a multi-tier architecture with Tivoli Workload Scheduler domain managers.
- You get extended planning capabilities, such as the use of long-term plans, trial plans, and extended plans, in the distributed Tivoli Workload Scheduler network as well. (Extended plans means that the current plan can span more than 24 hours.)
- The powerful run-cycle and calendar functions in Tivoli Workload Scheduler for z/OS can be used for distributed Tivoli Workload Scheduler jobs.

Besides these benefits, using Tivoli Workload Scheduler for z/OS end-to-end scheduling also makes it possible to:
- Reuse or reinforce the procedures and processes that are established for the Tivoli Workload Scheduler for z/OS mainframe environment. Operators, planners, and administrators who are trained and experienced in managing the Tivoli Workload Scheduler for z/OS workload can reuse their skills and knowledge for the distributed jobs managed by Tivoli Workload Scheduler for z/OS end-to-end scheduling.
- Extend the disciplines established to manage and operate workload scheduling in the mainframe environment to the distributed environment.
- Extend contingency procedures established for the mainframe environment to the distributed environment.

Basically, when we look at end-to-end scheduling in this book, we consider scheduling in the enterprise (mainframe and distributed) where the Tivoli Workload Scheduler for z/OS engine is the master.
3.3 Before you start the installation
The short version of this story is: "Get the right people on board."

End-to-end scheduling with Tivoli Workload Scheduler is not complicated to implement, but it is important to understand that end-to-end scheduling can involve many different platforms and operating systems, will use IP communication, can work across firewalls, and can use SSL communication. As described earlier in this book, end-to-end scheduling involves two products: Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS. These products must be installed and configured to work together for successful end-to-end scheduling. Tivoli Workload Scheduler for z/OS is installed in the z/OS mainframe environment, and Tivoli Workload Scheduler is installed on the distributed platforms where job scheduling is going to be performed.

We suggest that you establish an end-to-end scheduling team or project group that includes people who are skilled in the different platforms and operating systems. Ensure that you have skilled people who know how IP communication, firewalls, and SSL work in the different environments and who can configure these components. The team will be responsible for the planning, installation, and operation of the end-to-end scheduling environment; it must be able to cooperate across department boundaries and must understand the entire scheduling environment, both mainframe and distributed.

Tivoli Workload Scheduler for z/OS administrators should be familiar with the domain architecture and the meaning of fault tolerance, in order to understand, for example, that the script is not necessarily located in the job repository database. This is essential when it comes to reflecting the end-to-end network topology in Tivoli Workload Scheduler for z/OS. On the other hand, the people who are in charge of Tivoli Workload Scheduler need to know the Tivoli Workload Scheduler for z/OS architecture to understand the new planning mechanism and Symphony file creation.

Another important thing to plan for is education or skills transfer to the planners and operators who will have the daily responsibility for end-to-end scheduling. If your planners and operators are knowledgeable, they will be able to work more independently with the products and you will realize better quality. We recommend that all involved people (mainframe and distributed scheduling) become familiar with both scheduling environments, as described throughout this book.
Because end-to-end scheduling can involve different platforms and operating systems with different interfaces (TSO/ISPF on the mainframe, a command prompt on UNIX, and so forth), we also suggest planning to deploy the Job Scheduling Console. The JSC provides a unified and platform-independent interface to job scheduling, so users do not need detailed skills in the interface of each particular operating system.

3.3.1 How to order the Tivoli Workload Scheduler software
The Tivoli Workload Scheduler solution consists of three products:
- IBM Tivoli Workload Scheduler for z/OS (formerly called Tivoli Operations Planning and Control, or OPC): Focused on mainframe-based scheduling.
- Tivoli Workload Scheduler (formerly called Maestro): Focused on open systems-based scheduling; can be used with the mainframe-based product for a comprehensive solution across both distributed and mainframe environments.
- Tivoli Workload Scheduler for Applications: Enables direct, easy integration between Tivoli Workload Scheduler and enterprise applications such as Oracle E-Business Suite, PeopleSoft, and SAP R/3.

Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS can be ordered independently or together in one program suite. The JSC graphical user interface is delivered together with Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler. This is also the case for the connector software that makes it possible for the JSC to communicate with either Tivoli Workload Scheduler for z/OS or Tivoli Workload Scheduler. Table 3-1 shows each product and its included components.

Table 3-1 Product and components (component: delivered with)
- z/OS engine (OPC Controller and Tracker): IBM Tivoli Workload Scheduler for z/OS 8.2
- Tracker agent enabler: IBM Tivoli Workload Scheduler for z/OS 8.2
- End-to-end enabler: IBM Tivoli Workload Scheduler for z/OS 8.2
- Tivoli Workload Scheduler distributed (Maestro): Tivoli Workload Scheduler 8.2
- Tivoli Workload Scheduler Connector: Tivoli Workload Scheduler 8.2
- IBM Tivoli Workload Scheduler for z/OS Connector: Tivoli Workload Scheduler 8.2
- Job Scheduling Console: IBM Tivoli Workload Scheduler for z/OS 8.2 and Tivoli Workload Scheduler 8.2
- IBM Tivoli Workload Scheduler for Applications for z/OS (Tivoli Workload Scheduler extended agent for z/OS): Tivoli Workload Scheduler 8.2 for Applications

Note that the end-to-end enabler component (FMID JWSZ203) is used to populate the base binary directory in an HFS during System Modification Program/Extended (SMP/E) installation. The tracker agent enabler component (FMID JWSZ2C0) makes it possible for the Tivoli Workload Scheduler for z/OS controller to communicate with old Tivoli OPC distributed tracker agents.

Attention: The Tivoli OPC distributed tracker agents went out of support on October 31, 2003.

To be able to use the end-to-end scheduling solution, you should order both products: IBM Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler. In the following section, we list the ordering details. Contact your IBM representative if you have any problems ordering the products or if you are missing some of the deliverables or components.

Software ordering details
Table 3-2 on page 116 shows ordering details for Tivoli Workload Scheduler for z/OS and Tivoli Workload Scheduler.
Table 3-2 Ordering details

IBM Tivoli Workload Scheduler for z/OS 8.2 (program number 5697-WSZ):
- z/OS engine: yes, optional
- z/OS agent: yes, optional
- End-to-end enabler: yes, optional
- JSC: yes
- Delivery: native tape, ServicePac, or CBPDO
- Comments: the three z/OS components can be licensed and delivered individually

IBM Tivoli Workload Scheduler for z/OS Host Edition (program number 5698-WSH):
- z/OS engine: yes
- z/OS agent: yes
- End-to-end enabler: yes
- JSC: yes
- Delivery: ServicePac or CBPDO
- Comments: all three z/OS components are included when the customer buys and takes delivery of the Host Edition

Tivoli Workload Scheduler 8.2 (program number 5698-A17):
- Distributed FTA: yes
- JSC: yes
- Delivery: CD-ROM for all distributed platforms

3.3.2 Where to find more information for planning
Besides this redbook, you can find more information in IBM Tivoli Workload Scheduling Suite General Information Version 8.2, SC32-1256. This manual is a good place to start to learn more about Tivoli Workload Scheduler, Tivoli Workload Scheduler for z/OS, the JSC, and end-to-end scheduling.

3.4 Planning end-to-end scheduling with Tivoli Workload Scheduler for z/OS
Before installing Tivoli Workload Scheduler for z/OS and activating the end-to-end scheduling feature, there are several areas to consider and plan for. These areas are described in the following sections.
3.4.1 Tivoli Workload Scheduler for z/OS documentation
Tivoli Workload Scheduler for z/OS documentation is not shipped in hardcopy form with IBM Tivoli Workload Scheduler for z/OS 8.2. The books are available in PDF and IBM softcopy format and are delivered on a CD-ROM with the Tivoli Workload Scheduler for z/OS product. The CD-ROM has part number SK2T-6951 and can also be ordered separately.

Several of the Tivoli Workload Scheduler for z/OS books have been updated or revised starting in April 2004. This means that the books delivered with the base product are outdated, and we strongly suggest that you confirm that you have the newest versions of the books before starting the installation. This is true even for Tivoli Workload Scheduler for z/OS 8.2.

Note: The publications are available for download in PDF format at:
http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html

Look for books marked with "Revised April 2004," as they have been updated with changes introduced by service (APARs and PTFs) for Tivoli Workload Scheduler for z/OS produced after the base version of the product was released in June 2003. We recommend that you have access to, and possibly print, the newest versions of the Tivoli Workload Scheduler for z/OS publications before starting the installation.

Tivoli OPC tracker agents
Although the distributed Tivoli OPC tracker agents are not supported and cannot be ordered any more, Tivoli Workload Scheduler for z/OS 8.2 can still communicate with these tracker agents, because the agent enabler software (FMID JWSZ2C0) is delivered with Version 8.2. However, the Version 8.2 manuals do not describe the related TCP or APPC ROUTOPTS initialization statement parameters. If you are going to use Tivoli OPC tracker agents with Version 8.2, save the related Tivoli OPC publications so that you can use them for reference when necessary.

3.4.2 Service updates (PSP bucket, APARs, and PTFs)
Before starting the installation, be sure to check the service level of the Tivoli Workload Scheduler for z/OS distribution that you have received from IBM, and make sure that you get all available service so it can be installed with Tivoli Workload Scheduler for z/OS.
Because the period from the time that installation of Tivoli Workload Scheduler for z/OS is started until it is activated in your production environment can be several months, we suggest that the installed Tivoli Workload Scheduler for z/OS be updated with all service that is available at installation time.

Preventive service planning (PSP)
The Program Directory that is provided with your Tivoli Workload Scheduler for z/OS distribution tape is an important document that may include technical information that is more recent than the information provided in this section. It also describes the program temporary fix (PTF) level of the Tivoli Workload Scheduler for z/OS licensed program when it was shipped from IBM, and it contains instructions for unloading the software and information about additional maintenance for the level of your distribution tape for the z/OS installation.

Before you start installing Tivoli Workload Scheduler for z/OS, check the preventive service planning bucket for recommendations that may have been added by the service organizations after your Program Directory was produced. The PSP includes a recommended service section that includes high-impact or pervasive (HIPER) APARs. Ensure that the corresponding PTFs are installed before you start to customize a Tivoli Workload Scheduler for z/OS subsystem. Table 3-3 gives the PSP information for Tivoli Workload Scheduler for z/OS to be used when ordering the PSP bucket.

Table 3-3 PSP upgrade and subset ID information
Upgrade: TWSZOS820
- Subset HWSZ200: Agent for z/OS
- Subset JWSZ202: Engine (Controller)
- Subset JWSZ2A4: Engine English NLS
- Subset JWSZ201: TCP/IP communication
- Subset JWSZ203: End-to-end enabler
- Subset JWSZ2C0: Agent enabler

Important: If you are running a previous version of IBM Tivoli Workload Scheduler for z/OS or OPC on a system where the JES2 EXIT2 was assembled using the Tivoli Workload Scheduler for z/OS 8.2 macros, apply the following PTFs to avoid job tracking problems due to missing A1 and A3P records:
- Tivoli OPC 2.3.0: Apply UQ66036 and UQ68474.
- IBM Tivoli Workload Scheduler for z/OS 8.1: Apply UQ67877.
Important service for Tivoli Workload Scheduler for z/OS
Besides the APARs and PTFs that are listed in the PSP bucket, we suggest that you plan to apply all available service for Tivoli Workload Scheduler for z/OS in the installation phase. At the time of writing this book, we found several important APARs for Tivoli Workload Scheduler for z/OS end-to-end scheduling; some of them are listed in Table 3-4. The table also shows whether the corresponding PTFs were available when this book was written.

Note: The APAR list in Table 3-4 is not complete; it gives some examples of important service to apply during the installation. As mentioned before, we strongly suggest that you apply all available service during your installation of Tivoli Workload Scheduler for z/OS.

Table 3-4 Important service
- APAR PQ76474 (PTFs UQ81495, UQ81498): Checks the number of dependencies for an FTW job and adds two new messages, EQQX508E and EQQ3127E, to indicate that an FTW job cannot be added to the AD or CP (Symphony file) because the job has more than 40 dependencies.
- APAR PQ77014 (PTFs UQ81476, UQ81477): During daily planning or a Symphony renew, the batch job ends with RC=0 even though warning messages have been issued for the Symphony file.
- APAR PQ77535 (documentation APAR, PTF not available at the time of writing): Important documentation with additional information for creating and maintaining the HFS files needed for Tivoli Workload Scheduler end-to-end processing.
- APAR PQ77970 (PTFs UQ82583, UQ82584, UQ82585, UQ82587, UQ82579, UQ82601, UQ82602): Makes it possible to customize the job name in the Symphony file. Before the fix, the job name was always generated using the operation number and occurrence name; now it can be customized. The EQQPDFXJ member in the SEQQMISC library holds a detailed description (see Chapter 4, "Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling" on page 157 for more information).
- APAR PQ78043 (PTF UQ81567): 64M is recommended as the minimum region size for an end-to-end server; however, the sample server JCL (member EQQSER in SEQQSAMP) still has REGION=6M. This should be changed to REGION=64M.
- APAR PQ78097 (documentation APAR, PTF not available at the time of writing): Better documentation of the WSSTAT MANAGES keyword.
- APAR PQ78356 (PTF UQ82697): When a job stream is added to the Symphony file via MCP, it is always added with the GMT time; in this case the local time zone set for the FTA is completely ignored.
- APAR PQ78891 (PTFs UQ82790, UQ82791, UQ82784, UQ82793, UQ82794): Introduces new messages in the server message log when USS processes end abnormally or unexpectedly; important for monitoring the server and USS processes. Also updates server-related messages in the controller message log to be more precise.
- APAR PQ79126 (documentation APAR, PTF not available at the time of writing): In the documentation, any reference to zFS files is missing. The Tivoli Workload Scheduler end-to-end server fully supports and can access UNIX System Services (USS) in a Hierarchical File System (HFS) or in a zSeries File System (zFS) cluster.
- APAR PQ79875 (documentation APAR, PTF not available at the time of writing): If you have any fault-tolerant workstations on supported Windows platforms and you want to run jobs on these workstations, you must create a member containing all users and passwords for the Windows users who need to schedule jobs to run on Windows workstations. The Windows users are described using USRREC initialization statements.
- APAR PQ80229 (documentation APAR, PTF not available at the time of writing): In the IBM Tivoli Workload Scheduler for z/OS Installation Guide, the description of the end-to-end input and output event data sets (EQQTWSIN and EQQTWSOU) is misleading because it states that the LRECL for these files can be anywhere from 120 to 32000 bytes. In reality, the LRECL must be 120. Defining a larger LRECL wastes disk space, which can lead to problems if the EQQTWSIN and EQQTWSOU files fill up completely. Also see the text in APAR PQ77970.
- APAR PQ80341 (PTFs UQ88867, UQ88868, UQ88869): End-to-end: missing synchronization process between the event manager and receiver tasks at controller startup. Several new messages are introduced by this APAR (documented in the EQQPDFEM member in the SEQQMISC library).
- APAR PQ81405 (PTFs UQ82765, UQ82766): Checks the number of dependencies for an FTW job and adds a new message, EQQG016E, to indicate that an FTW job cannot be added to the CP because the job has more than 40 dependencies.
- APAR PQ84233 (PTFs UQ87341, UQ87342, UQ87343, UQ87344, UQ87345, UQ87377): Implements support for the Tivoli Workload Scheduler for z/OS commands NP (NOP), UN (UN-NOP), and EX (Execute), and for the "submit" automatic option, for operations defined on fault-tolerant workstations. Also introduces a new TOPOLOGY NOPTIMEDEPENDENCY (YES/NO) parameter.
- APAR PQ87120 (PTF UQ89138): Porting of Tivoli Workload Scheduler 8.2 Fix Pack 04 to the end-to-end feature on z/OS. With this APAR, the Tivoli Workload Scheduler for z/OS 8.2 end-to-end code has been aligned with the Tivoli Workload Scheduler distributed code at the Fix Pack 04 level. This APAR also introduces the Backup Domain Fault Tolerant feature in the end-to-end environment.
- APAR PQ87110 (PTFs UQ90485, UQ90488): The Tivoli Workload Scheduler end-to-end server is not able to get a mutex lock if the mount point of a shared HFS is moved without stopping the server. The APAR also contains a very important documentation update that describes how to configure the end-to-end server work directory correctly in a sysplex environment with hot standby controllers.

Note: To learn about updates to the Tivoli Workload Scheduler for z/OS books and the APARs and PTFs that pre-date April 2004, consult the "April 2004 Revised" versions of the books, as mentioned in 3.4.1, "Tivoli Workload Scheduler for z/OS documentation" on page 117.
Special documentation updates introduced by service
Some APARs were fixed on Tivoli Workload Scheduler for z/OS 8.1 while the general availability code for Tivoli Workload Scheduler for z/OS 8.2 was frozen because of shipment. All of these fixes or PTFs are sysrouted through level-set APAR PQ74854 (also described as a hiper cumulative APAR). This cumulative APAR is meant to align the Version 8.2 code with the maintenance level that was reached during the time the GA code was frozen. With APAR PQ74854, the documentation has been updated and is available in a PDF file. To access the changes described in this PDF file:
1. Apply the PTF for APAR PQ74854.
2. Transfer the EQQPDF82 member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.
3. Read the document using Adobe (Acrobat) Reader.

APAR PQ77970 (see Table 3-4 on page 119) makes it possible to customize how the job name in the Symphony file is generated. The PTF for APAR PQ77970 installs a member, EQQPDFXJ, in the SEQQMISC library. This member holds a detailed description of how the job name in the Symphony file can be customized and how to specify the related parameters. To read the documentation in the EQQPDFXJ member:
1. Apply the PTF for APAR PQ77970.
2. Transfer the EQQPDFXJ member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.
3. Read the document using Adobe Reader.

APAR PQ84233 (see Table 3-4 on page 119) implements support for Tivoli Workload Scheduler for z/OS commands for fault-tolerant agents and introduces a new TOPOLOGY NOPTIMEDEPENDENCY(YES/NO) parameter. The PTF for APAR PQ84233 installs a member, EQQPDFNP, in the SEQQMISC library. This member holds a detailed description of the supported commands and the NOPTIMEDEPENDENCY parameter. To read the documentation in the EQQPDFNP member:
1. Apply the PTF for APAR PQ84233.
2. Transfer the EQQPDFNP member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.
3. Read the document using Adobe Reader.
Note: The documentation updates that are described in the EQQPDF82, EQQPDFXJ, and EQQPDFNP members in SEQQMISC are included in the "April 2004 Revised" versions of the Tivoli Workload Scheduler for z/OS books, mentioned in 3.4.1, "Tivoli Workload Scheduler for z/OS documentation" on page 117.

APAR PQ80341 (see Table 3-4 on page 119) improves the synchronization process between the controller event manager and receiver tasks. The APAR also introduces several new or updated messages. The PTF for APAR PQ80341 installs a member, EQQPDFEM, in the SEQQMISC library. This member holds a detailed description of the new or updated messages related to the improved synchronization process. To read the documentation in the EQQPDFEM member:
1. Apply the PTF for APAR PQ80341.
2. Transfer the EQQPDFEM member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.
3. Read the document using Adobe Reader.

APAR PQ87110 (see Table 3-4 on page 119) contains important documentation updates with suggestions on how to define the end-to-end server work directory in a sysplex shared HFS environment and a procedure to be followed before starting a scheduled shutdown for a system in the sysplex. The PTF for APAR PQ87110 installs a member, EQQPDFSY, in the SEQQMISC library. This member holds the documentation updates. To read the documentation in the EQQPDFSY member:
1. Apply the PTF for APAR PQ87110.
2. Transfer the EQQPDFSY member from the SEQQMISC library on the mainframe to a file on your personal workstation. Remember to transfer using the binary transfer type. The file extension must be .pdf.
3. Read the document using Adobe Reader.
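The binary transfer of these SEQQMISC members can be done with any file transfer tool; the following sketch uses the z/OS FTP server from a workstation command prompt. The host name, user ID, and data set high-level qualifier are examples only.

   ftp mvs.example.com
   > user TWSADM                (log in with your TSO user ID)
   > binary                     (required: the members are PDF files, so use binary mode)
   > cd 'TWS.V8R2M0.SEQQMISC'   (example data set name for the SEQQMISC library)
   > get EQQPDFXJ eqqpdfxj.pdf
   > get EQQPDFEM eqqpdfem.pdf
   > get EQQPDFSY eqqpdfsy.pdf
   > quit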
3.4.3 Tivoli Workload Scheduler for z/OS started tasks for end-to-end scheduling
As described in the architecture chapter, end-to-end scheduling involves at least two started tasks: the Tivoli Workload Scheduler for z/OS controller and the Tivoli Workload Scheduler for z/OS server.

The server started task does all communication with the distributed fault-tolerant agents and handles updates (for example, to the Symphony file). The server task must always run on the same z/OS system as the active controller task.

In Tivoli Workload Scheduler for z/OS 8.2, it is possible to configure one server started task that handles end-to-end scheduling, communication with JSC users, and APPC communication. Even though this is possible, we strongly suggest using a dedicated server started task for end-to-end scheduling. Using dedicated started tasks with dedicated responsibilities makes it possible, for example, to restart the JSC server started task without any impact on the scheduling done by the end-to-end server started task.

Although it is possible to run end-to-end scheduling with the Tivoli Workload Scheduler for z/OS ISPF interface alone, we suggest that you plan for use of the Job Scheduling Console (JSC) graphical user interface. Users with a background in the distributed world will find the JSC much easier to use than learning a new interface such as TSO/ISPF to manage their daily work. Therefore, we also suggest planning for a server started task that can handle the communication with the JSC connector (JSC users).
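The end-to-end server started task is typically built from the EQQSER sample in SEQQSAMP. The JCL below is only a sketch to show the shape of such a procedure: the data set names and member name are examples, and your EQQSER sample and the installation guide remain the reference. Note the REGION=64M value recommended by APAR PQ78043.

   //TWSCE2E  PROC
   //* End-to-end server started task (a sketch based on the EQQSER sample;
   //* data set names are examples). REGION=64M per APAR PQ78043.
   //TWSCE2E  EXEC PGM=EQQSERVR,REGION=64M,TIME=1440
   //STEPLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQLMD0
   //EQQMLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQMSG0
   //EQQMLOG  DD SYSOUT=*
   //EQQPARM  DD DISP=SHR,DSN=TWS.INST.PARM(TWSCE2E)
   //* End-to-end event data sets and centralized script data set (see 3.4.5)
   //EQQTWSIN DD DISP=SHR,DSN=TWS.INST.TWSIN
   //EQQTWSOU DD DISP=SHR,DSN=TWS.INST.TWSOU
   //EQQTWSCS DD DISP=SHR,DSN=TWS.INST.TWSCS
   //EQQDUMP  DD SYSOUT=*
   //SYSMDUMP DD SYSOUT=*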
3.4.4 Hierarchical File System (HFS) cluster
Terminology note: An HFS data set is a z/OS data set that contains a POSIX-compliant hierarchical file system, which is a collection of files and directories organized in a hierarchical structure that can be accessed using z/OS UNIX System Services (USS).

The Tivoli Workload Scheduler code has been ported to UNIX System Services (USS) on z/OS. When planning for end-to-end scheduling with Tivoli Workload Scheduler for z/OS, keep in mind that the server starts multiple tasks and processes using USS in z/OS. The end-to-end server accesses the code delivered from IBM and creates several work files in Hierarchical File System clusters. Because of this, the z/OS USS function must be active in the z/OS environment before you can install and use the end-to-end scheduling feature in Tivoli Workload Scheduler for z/OS.

The Tivoli Workload Scheduler code is installed with SMP/E in an HFS cluster in USS. It can be installed in an existing HFS cluster or in a dedicated HFS cluster, depending on how z/OS USS is configured. Besides the installation binaries delivered from IBM, the Tivoli Workload Scheduler for z/OS server also needs several work files in a USS HFS cluster. We suggest that you use a dedicated HFS cluster for the server work files. If you are planning to install several Tivoli Workload Scheduler for z/OS end-to-end scheduling environments, you should allocate one USS HFS cluster for work files per end-to-end scheduling environment.

Furthermore, if the z/OS environment is configured as a sysplex, where the Tivoli Workload Scheduler for z/OS server can be active on different z/OS systems within the sysplex, you should make sure that the USS HFS clusters with the Tivoli Workload Scheduler for z/OS binaries and work files can be accessed from all of the sysplex's systems. Starting from OS/390 Version 2 Release 9, it is possible to mount USS HFS clusters either in read-only mode or in read/write mode on all systems in a sysplex. The USS HFS cluster with the Tivoli Workload Scheduler for z/OS binaries should then be mounted in read-only mode on all systems, and the USS HFS cluster with the Tivoli Workload Scheduler for z/OS work files should be mounted in read/write mode on all systems in the sysplex. Figure 3-1 on page 126 illustrates the use of dedicated HFS clusters for two Tivoli Workload Scheduler for z/OS environments: test and production.
Figure 3-1 Dedicated HFS clusters for Tivoli Workload Scheduler for z/OS server test and production environments (the figure shows, for the production environment, the server work files in HFS data set OMVS.TWSCPROD.HFS mounted read/write on all systems at mount point /TWS/TWSCPROD, matching WRKDIR('/TWS/TWSCPROD'), and the installation binaries in HFS data set OMVS.PROD.TWS820.HFS mounted read-only on all systems at mount point /TWS/PROD/bin820, matching BINDIR('/TWS/PROD/bin820'); the test environment uses OMVS.TWSCTEST.HFS at /TWS/TWSCTEST and OMVS.TEST.TWS820.HFS at /TWS/TEST/bin820 in the same way)

Note: IBM Tivoli Workload Scheduler for z/OS 8.2 supports zFS (z/OS File System) clusters as well as HFS clusters (APAR PQ79126). Because zFS offers significant performance improvements over HFS, we suggest considering the use of zFS clusters instead of HFS clusters. For this redbook, we used HFS clusters in our implementation.

We recommend that you create a separate HFS cluster for the working directory, mounted in read/write mode. This is because the working directory is application specific and contains application-related data; it also makes your backups easier. The size of the cluster depends on the size of the Symphony file and on how long you want to keep the stdlist files. We recommend starting with at least 2 GB of space.

We also recommend that you plan to have separate HFS clusters for the binaries if you have more than one Tivoli Workload Scheduler end-to-end scheduling environment, as shown in Figure 3-1. This makes it possible to apply maintenance and test it in the test environment before it is promoted to the production environment.
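Using the data set and mount point names from Figure 3-1, the production clusters could be mounted with TSO MOUNT commands like the following (or with equivalent MOUNT statements in the BPXPRMxx parmlib member). This is a sketch only; AUTOMOVE behavior and security settings are site specific.

   MOUNT FILESYSTEM('OMVS.TWSCPROD.HFS') MOUNTPOINT('/TWS/TWSCPROD') TYPE(HFS) MODE(RDWR)
   MOUNT FILESYSTEM('OMVS.PROD.TWS820.HFS') MOUNTPOINT('/TWS/PROD/bin820') TYPE(HFS) MODE(READ)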
As mentioned earlier, OS/390 2.9 and higher support the use of shared HFS clusters. Some directories (usually /var, /dev, /etc, and /tmp) are system specific, meaning that those paths are logical links pointing to different directories. When you specify the work directory, make sure that it is not on a system-specific file system. Or, if it is, make sure that the same directories on the file systems of the other systems point to the same directory. For example, you can use /u/TWS, which is not system-specific. Or you can use /var/TWS on system SYS1 and create a symbolic link /SYS2/var/TWS to /SYS1/var/TWS so that /var/TWS points to the same directory on both SYS1 and SYS2.

If you are using OS/390 versions earlier than Version 2.9 in a sysplex, the HFS clusters with the work files and binaries should be mounted manually on the system where the server is active. If the server is going to be moved to another system in the sysplex, the HFS clusters should be unmounted from the first system and mounted on the system where the server is going to be active. On the new system, the HFS cluster with the work files should be mounted in read/write mode, and the HFS cluster with the binaries should be mounted in read-only mode. The file system can be mounted in read/write mode on only one system at a time.

Note: Also check the documentation updates in APAR PQ87110 (see Table 3-4 on page 119) if you are planning to use a shared HFS work directory for the end-to-end server. The PTFs for this APAR contain important documentation updates with suggestions on how to define the end-to-end server work directory in a sysplex shared HFS environment and a procedure to be followed before starting a scheduled shutdown for a system in the sysplex.

Migrating from IBM Tivoli Workload Scheduler for z/OS 8.1
If you are migrating from Tivoli Workload Scheduler for z/OS 8.1 to Tivoli Workload Scheduler for z/OS 8.2 and you are using end-to-end scheduling in the 8.1 environment, we suggest that you allocate new, dedicated USS HFS clusters for the Tivoli Workload Scheduler for z/OS 8.2 work files and installation binaries.

3.4.5 Data sets related to end-to-end scheduling
Tivoli Workload Scheduler for z/OS has several data sets that are dedicated to end-to-end scheduling:
- End-to-end input and output data sets (EQQTWSIN and EQQTWSOU). These data sets are used to send events from the controller to the server and from the server to the controller. They must be defined in the controller and end-to-end server started task procedures.
- Current plan backup copy data set used to create the Symphony file (EQQSCPDS). This is a VSAM data set used as a CP backup copy for the production of the
Symphony file in USS. It must be defined in the controller started task procedure and in the current plan extend job, the current plan replan job, and the Symphony renew job.
- End-to-end script library (EQQSCLIB). This is a partitioned data set that holds commands or job definitions for fault-tolerant agent jobs. It must be defined in the controller started task procedure and in the current plan extend job, the current plan replan job, and the Symphony renew job.
- End-to-end centralized script data set (EQQTWSCS). This is a partitioned data set that holds scripts for fault-tolerant agent jobs while they are sent to the agent. It must be defined in the controller and end-to-end server started tasks.

Plan for the allocation of these data sets, and remember to specify the data sets in the controller and end-to-end server started task procedures as required, as well as in the current plan extend job, the replan job, and the Symphony renew job as required.

In the planning phase, you should also consider whether your installation will use centralized scripts, non-centralized (local) scripts, or a combination of centralized and non-centralized scripts (a sketch of a non-centralized script definition follows this list):

Non-centralized (local) scripts
- In Tivoli Workload Scheduler for z/OS 8.2, it is possible to have job definitions in the end-to-end script library and have the script (the job) executed on the fault-tolerant agent. This is referred to as a non-centralized script.
- Using non-centralized scripts makes it possible for the fault-tolerant agent to run local jobs without any connection to the controller on the mainframe.
- On the other hand, if a non-centralized script must be updated, this must be done locally on the agent.
- Locally placed scripts can be consolidated in a central repository on the mainframe or on a fault-tolerant agent; then, on a daily basis, changed or updated scripts can be distributed to the FTAs where they will be executed. By doing this, you can keep all scripts in a common repository, which facilitates easy modification of scripts because you only have to change them in one place. We recommend this option because it gives most of the benefits of using centralized scripts without sacrificing fault tolerance.

Centralized scripts
- Another possibility in Tivoli Workload Scheduler for z/OS 8.2 is to have the scripts on the mainframe. The scripts are then defined in the controller job library and, via the end-to-end server, the controller sends the script to the fault-tolerant agent when jobs are ready to run.
- This makes it possible to centrally manage all scripts.
- However, it compromises the fault tolerance of the end-to-end scheduling network, because the controller must have a connection to the fault-tolerant agent to be able to send the script.
- The centralized script function makes migration from Tivoli OPC tracker agents with centralized scripts to end-to-end scheduling much simpler.

Combination of non-centralized and centralized scripts
- The third possibility is to use a combination of non-centralized and centralized scripts.
- Here the decision can be made based on such factors as:
  - Where a particular FTA is placed in the network
  - How stable the network connection to the FTA is
  - How fast the connection to the FTA is
  - Special requirements for different departments to have dedicated access to their scripts on their local FTA
- For non-centralized scripts, it is still possible to have a centralized repository with the scripts and then, on a daily basis, to distribute changed or updated scripts to the FTAs with non-centralized scripts.
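As an example of the non-centralized approach referred to above, each fault-tolerant agent job gets a member in the EQQSCLIB script library that points to the script (or command) to run locally on the agent. The sketch below shows such a member; the member name, script path, and user name are examples, and additional JOBREC keywords (for example, for interactive jobs or return-code handling) are omitted.

   /* Member PAYX0001 in the EQQSCLIB data set (names and path are examples) */
   JOBREC
     JOBSCR('/opt/payroll/bin/nightly_extract.sh')
     JOBUSR(twsuser)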
In the planning phase, consider what can happen:
  When the z/OS system with the controller and server tasks fails
  When the controller or the server task fails
  When the z/OS system with the controller has to be stopped for a longer time (for example, due to maintenance)

The goal is to make the end-to-end server task and the controller task as fail-safe as possible and to make it possible to move these tasks from one system to another within a sysplex without any major disruption to the mainframe and distributed job scheduling.

As explained earlier, the end-to-end server is a started task that must run on the same z/OS system as the controller. The end-to-end server handles all communication between the controller task and the domain managers at the first level in the distributed Tivoli Workload Scheduler network.

One of the main reasons to configure the controller and server tasks in a sysplex environment is to make these tasks as fail-safe as possible. This means that the tasks can be moved from one system to another within the same sysplex without any stop in batch scheduling. The controller and server tasks can be moved as part of planned maintenance or in case a system fails. This process can be automated and made seamless for the user by using the Tivoli Workload Scheduler for z/OS Hot Standby function.

The problem with running end-to-end in a z/OS sysplex and trying to move the end-to-end server from one system to another is that the end-to-end server by default gets its IP address from the TCP/IP stack of the z/OS system where it is started. If the end-to-end server is moved to another z/OS system within the sysplex, it normally gets another IP address (Figure 3-2).
Figure 3-2 Moving one system to another within a z/OS sysplex (panel 1: the active controller and server run on one z/OS system in the sysplex, and the server has a system-dependent IP address; panel 2: the active engine is moved to another system in the sysplex and the server gets a new system-dependent IP address, which can cause problems for FTA connections because the IP address is stored in the Symphony file)

When the end-to-end server starts, it looks in the topology member to find its host name or IP address and port number. In particular, the host name or IP address is:
  Used to identify the socket on which the server receives data from and sends data to the distributed agents (the domain managers at the first level)
  Stored in the Symphony file, where it is recognized by the distributed agents as the IP address (or host name) of the master domain manager (OPCMASTER)

If the host name is not defined or the default is used, the end-to-end server uses the host name returned by the operating system (that is, the host name returned by the active TCP/IP stack on the system).

The port number and host name are inserted in the Symphony file when a current plan extend or replan batch job is submitted or a Symphony renew is initiated in the controller task. The Symphony file is then distributed to the domain managers at the first level, which in turn use this information to link back to the server.
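The topology member mentioned here is the PARMLIB member that contains the TOPOLOGY statement. As a rough sketch (the keywords are those documented for Tivoli Workload Scheduler for z/OS 8.2, but the paths, names, and port shown are only illustrative), the entries that matter for the reconnection discussion are HOSTNAME and PORTNUMBER:

TOPOLOGY BINDIR('/usr/lpp/TWS/V8R2M0')   /* ITWS code installed in USS          */
         WRKDIR('/var/TWS/inst')         /* end-to-end server work directory    */
         HOSTNAME(TWSCE2E)               /* host name written to the Symphony   */
         PORTNUMBER(31182)               /* port written to the Symphony        */
         TPLGYMEM(TPLGINFO)              /* member with DOMREC/CPUREC           */
         USRMEM(USRINFO)                 /* member with USRREC                  */

Whatever HOSTNAME resolves to at plan time is what the first-level domain managers will try to reach afterwards, which is why the rest of this section concentrates on making that name resolve consistently after a move within the sysplex.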
Figure 3-3 First-level domain managers connected to Tivoli Workload Scheduler for z/OS server in z/OS sysplex (a sysplex of wtsc63, wtsc64, and wtsc65 hosting the active and standby controllers, with first-level domain managers london for the UK domain, geneva for the Europe domain, and stockholm for the Nordic domain, and their subordinate FTAs)

If the z/OS controller fails on the wtsc64 system (see Figure 3-3), the standby controller on either wtsc63 or wtsc65 can take over all of the engine functions (run the controller and the end-to-end server tasks). Which controller takes over depends on how the standby controllers are configured.

The domain managers at the first level (london, geneva, and stockholm in Figure 3-3 on page 132) know wtsc64 as their master domain manager (from the Symphony file), so the link from the domain managers to the end-to-end server fails, no matter which system (wtsc63 or wtsc65) the controller takes over on.

One solution could be to send a new Symphony file (renew the Symphony file) from the controller and server that have taken over to the domain managers at the first level. Renewing the Symphony file on the new controller and server re-creates the Symphony file and adds the new z/OS host name or IP address (read from the topology definition or returned by the z/OS operating system) to the Symphony file. The domain managers then use this information to reconnect to the server on the new z/OS system.

Since renewing the Symphony file can be disruptive, especially in a heavily loaded scheduling environment, we explain three alternative strategies that can
be used to solve the reconnection problem after the server and controller have been moved to another system in the sysplex.

For all three alternatives, the topology member is used to specify the host name and port number for the Tivoli Workload Scheduler for z/OS server task. The host name is copied to the Symphony file when the Symphony file is renewed or the Tivoli Workload Scheduler for z/OS current plan is extended or replanned. The domain managers at the first level use the host name read from the Symphony file to connect to the end-to-end server.

Because the first-level domain managers try to link to the end-to-end server using the host name defined in the server hostname parameter, you must take the required action to re-establish the connection successfully: make sure that the host name always resolves to the IP address of the z/OS system with the active end-to-end server. This can be achieved in different ways. The following three sections describe three different ways to handle the reconnection problem when the end-to-end server is moved from one system to another in the same sysplex.

Use of the host file on the domain managers at the first level

To be able to use the same host name after a fail-over (where the engine is moved to one of its backup engines) and to gain additional flexibility, we use a host name that can always be resolved to the IP address of the z/OS system with the active end-to-end server. The resolution of the host name is done by the first-level domain managers, which use their local hosts files to get the IP address of the z/OS system with the end-to-end server.

In the end-to-end server topology, we can define a host name with a given name (such as TWSCE2E). This host name is associated with an IP address by the TCP/IP stack, for example in the USS /etc/hosts file, where the end-to-end server is active. The different IP addresses of the systems where the engine can be active are defined in the host name file (/etc/hosts on UNIX) on the domain managers at the first level, as in Example 3-1.

Example 3-1 hosts file
9.12.6.8    wtsc63.itso.ibm.com
9.12.6.9    wtsc64.itso.ibm.com   TWSCE2E
9.12.6.10   wtsc65.itso.ibm.com

If the server is moved to the wtsc63 system, you only have to edit the hosts file on the domain managers at the first level so that TWSCE2E points to the new system, as in Example 3-2.
Example 3-2 hosts file
9.12.6.8    wtsc63.itso.ibm.com   TWSCE2E
9.12.6.9    wtsc64.itso.ibm.com
9.12.6.10   wtsc65.itso.ibm.com

This change takes effect dynamically (the next time the domain manager tries to reconnect to the server).

One major disadvantage of this solution is that the change must be carried out by editing a local file on the domain managers at the first level. A simple move of the tasks on the mainframe then involves changes on distributed systems as well. In our example in Figure 3-3 on page 132, the local hosts file would have to be edited on three domain managers at the first level (the london, geneva, and stockholm servers).

Furthermore, localopts nm ipvalidate must be set to none on the agent, because the node name and IP address of the end-to-end server, which are stored for the OPCMASTER workstation (the workstation representing the end-to-end server) in the Symphony file on the agent, have changed. See the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273, for further information.

Use of stack affinity on the z/OS system

Another possibility is to use stack affinity to ensure that the end-to-end server host name resolves to the same IP address, even if the end-to-end server is moved to another z/OS system in the sysplex. With stack affinity, the end-to-end server host name is always resolved using the same TCP/IP stack (the same TCP/IP started task) and hence always gets the same IP address, regardless of which z/OS system the end-to-end server is started on.

Stack affinity provides the ability to define which specific TCP/IP instance the application should bind to. If you are running in a multiple-stack environment in which each system has its own TCP/IP stack, the end-to-end server can be forced to use a specific stack, even if it runs on another system. A specific stack (stack affinity) is defined in the Language Environment® variable _BPXK_SETIBMOPT_TRANSPORT.

To define environment variables for the end-to-end server, the DD-name STDENV should be added to the end-to-end server started task procedure. The STDENV DD-name can point to a sequential data set or a member of a partitioned data set (for example, a member of the
end-to-end server PARMLIB) in which it is possible to define environment variables to initialize Language Environment. In this data set or member, environment variables are specified in the form VARNAME=value. See IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264, for further information. For example:

//STDENV DD DISP=SHR,DSN=MY.FILE.PARM(STDENV)

This member can be used to set the stack affinity using the following environment variable:

_BPXK_SETIBMOPT_TRANSPORT=xxxxx

(xxxxx indicates the TCP/IP stack that the end-to-end server should bind to.)

One disadvantage of stack affinity is that a particular stack on a specific z/OS system is used. If this stack (the TCP/IP started task) or the z/OS system with this stack has to be stopped or requires an IPL, the end-to-end server, even though it can run on another system, will not be able to establish connections to the domain managers at the first level. If this happens, manual intervention is required. For more information, see the z/OS V1R2 Communications Server: IP Configuration Guide, SC31-8775.

Use of Dynamic Virtual IP Addressing (DVIPA)

DVIPA, which was introduced with OS/390 V2R8, makes it possible to assign a specific virtual IP address to a specific application. The configuration can be set up so that this virtual IP address is independent of any specific TCP/IP stack within the sysplex and depends only on the started application; that is, the IP address is the same for the application no matter which system in the sysplex the application is started on. Even if the application has to be moved to another system because of a failure or maintenance, it can be reached under the same virtual IP address.

Use of DVIPA is the most flexible way to be prepared for application or system failure. We recommend that you plan for use of DVIPA for the following Tivoli Workload Scheduler for z/OS components:
  Server started task used for end-to-end scheduling
  Server started task used for JSC communication
The Tivoli Workload Scheduler for z/OS end-to-end (and JSC) server has been improved in Version 8.2. This improvement makes better use of DVIPA for the end-to-end (and JSC) server than in Tivoli Workload Scheduler 8.1.

In IBM Tivoli Workload Scheduler for z/OS 8.1, a range of IP addresses to be used by DVIPA (VIPARANGE) had to be defined, as did specific PORT and IP addresses for the end-to-end server (Example 3-3).

Example 3-3 Some required DVIPA definitions for Tivoli Workload Scheduler for z/OS 8.1
VIPADYNAMIC
  viparange define 255.255.255.248 9.12.6.104
ENDVIPADYNAMIC
PORT
  5000  TCP TWSJSC  BIND 9.12.6.106
  31182 TCP TWSCE2E BIND 9.12.6.107

In this example, DVIPA automatically assigns an IP address to started task TWSCE2E, which represents our end-to-end server task; it is configured to use port 31182 and IP address 9.12.6.107. DVIPA is described in great detail in the z/OS V1R2 Communications Server: IP Configuration Guide, SC31-8775. In addition, the redbook TCP/IP in a Sysplex, SG24-5235, provides useful information about DVIPA.

One major problem with using DVIPA for the Tivoli Workload Scheduler for z/OS 8.1 end-to-end server was that the end-to-end server mailman process still used the IP address of the z/OS system (the local IP address for outbound connections was determined by the routing table on the z/OS system). If localopts nm ipvalidate was set to full on the first-level domain manager or backup domain manager, the outbound connection from the end-to-end server mailman process to the domain manager netman process was rejected. The result was that the outbound connection could not be established when the end-to-end server was moved from one system in the sysplex to another.

This has changed in Tivoli Workload Scheduler for z/OS 8.2: the end-to-end server now uses the host name or IP address specified in the TOPOLOGY HOSTNAME parameter for both inbound and outbound connections. This has the following advantages compared to Version 8.1:

1. It is not necessary to define the end-to-end server started task in the static DVIPA PORT definition. It is sufficient to define the DVIPA VIPARANGE parameter. When the end-to-end server starts and reads the TOPOLOGY HOSTNAME() parameter, it performs a gethostbyname() on the host name. The host name can be related to an IP address (in the VIPARANGE), for example in the USS
/etc/hosts file. The server then gets the same IP address across z/OS systems in the sysplex. Another major advantage is that if the host name or IP address has to be changed, it is sufficient to make the change in the /etc/hosts file. It is not necessary to change the TCP/IP definitions and restart the TCP/IP stack (as long as the new IP address is within the range of IP addresses defined in the VIPARANGE parameter).

2. The host name in the TOPOLOGY HOSTNAME() parameter is used for outbound connections (from the end-to-end server to the domain managers at the first level).

3. You can use network address IP validation on the domain managers at the first level.

The advantages of 1 and 2 also apply to the JSC server.

Example 3-4 shows the required DVIPA definitions for Tivoli Workload Scheduler 8.2 in our environment.

Example 3-4 Example of required DVIPA definitions for ITWS for z/OS 8.2
VIPADYNAMIC
  viparange define 255.255.255.248 9.12.6.104
ENDVIPADYNAMIC

And the /etc/hosts file in USS looks like:
9.12.6.107 twsce2e.itso.ibm.com twsce2e

Note: In the previous example, we show use of the /etc/hosts file in USS. For DVIPA, it is advisable to use DNS instead of the /etc/hosts file, because /etc/hosts definitions are in general defined locally on each machine (each z/OS image) in the sysplex.

3.4.7 Upgrading from Tivoli Workload Scheduler for z/OS 8.1 end-to-end scheduling

If you are running Tivoli Workload Scheduler for z/OS 8.1 end-to-end scheduling and are going to upgrade this environment to the 8.2 level, you should plan for use of the new functions and possibilities in Tivoli Workload Scheduler for z/OS 8.2 end-to-end scheduling.
Be aware especially of the new possibilities introduced by:

Centralized script
Are you using non-centralized scripts in the Tivoli Workload Scheduler for z/OS 8.1 scheduling environment? Would it be better or more efficient to use centralized scripts? If centralized scripts are going to be used, you should plan the activities necessary to consolidate the non-centralized scripts in the Tivoli Workload Scheduler for z/OS controller JOBLIB.

JCL variables in centralized or non-centralized scripts (or both)
In Tivoli Workload Scheduler for z/OS 8.2 you can use Tivoli Workload Scheduler for z/OS JCL variables in centralized and non-centralized scripts. If you have implemented a locally developed workaround in Tivoli Workload Scheduler for z/OS 8.1 to use JCL variables in non-centralized scripts, you should consider using the new possibilities in Tivoli Workload Scheduler for z/OS 8.2.

Recovery for jobs with non-centralized and centralized scripts
Will or can the use of recovery in jobs with non-centralized or centralized scripts improve your end-to-end scheduling? Is it something you should use in your Tivoli Workload Scheduler for z/OS 8.2 environment? Should the Tivoli Workload Scheduler for z/OS 8.1 job definitions be updated or changed to use these new recovery possibilities? Here again, some planning and consideration will be of great value.

New options for defining fault-tolerant workstation jobs and working with fault-tolerant workstations
Tivoli Workload Scheduler for z/OS 8.2 introduces some new options in the legacy ISPF dialogs as well as in the JSC for defining fault-tolerant jobs in Tivoli Workload Scheduler for z/OS. Furthermore, the legacy ISPF dialogs have been changed and improved, and new options have been added to make it easier to work with fault-tolerant workstations. Be prepared to educate your planners and operators so that they know how to use these new options and functions!

End-to-end scheduling is greatly improved in Version 8.2 of Tivoli Workload Scheduler for z/OS. Together with this improvement, several initialization statements have been changed. Furthermore, the network configuration for the end-to-end environment can be designed in another way in Tivoli Workload
Scheduler for z/OS 8.2 because, for example, Tivoli Workload Scheduler for z/OS 8.2 supports more than one first-level domain manager.

To summarize:
  Expect to take some time to plan your upgrade from Tivoli Workload Scheduler for z/OS Version 8.1 end-to-end scheduling to Version 8.2 end-to-end scheduling, because Tivoli Workload Scheduler for z/OS Version 8.2 has been improved with many new functions and initialization parameters.
  Plan time to investigate and read the new Tivoli Workload Scheduler for z/OS 8.2 documentation (remember to use the April 2004 Revised versions) to get a good understanding of the new end-to-end scheduling possibilities in Version 8.2 compared to Version 8.1.
  Furthermore, plan time to test and verify the use of the new functions and possibilities in Tivoli Workload Scheduler for z/OS 8.2 end-to-end scheduling.

3.5 Planning for end-to-end scheduling with Tivoli Workload Scheduler

In this section, we discuss how to plan end-to-end scheduling for Tivoli Workload Scheduler. We show how to configure your environment to fit your requirements, and we point out special considerations that apply to the end-to-end solution with Tivoli Workload Scheduler for z/OS.

3.5.1 Tivoli Workload Scheduler publications and documentation

Hardcopy Tivoli Workload Scheduler documentation is not shipped with the product. The books are available in PDF format on the Tivoli Workload Scheduler 8.2 product CD-ROM.

Note: The publications are also available for download in PDF format at:
http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html

Look for books marked "Revised April 2004," as they have been updated with documentation changes introduced by service (fix packs) for Tivoli Workload Scheduler produced since the base version of the product was released in June 2003.
3.5.2 Tivoli Workload Scheduler service updates (fix packs)

Before installing Tivoli Workload Scheduler, it is important to check for the latest service (fix pack) for Tivoli Workload Scheduler. Service for Tivoli Workload Scheduler is released in packages that normally contain a full replacement of the Tivoli Workload Scheduler code. These packages are called fix packs and are numbered FixPack 01, FixPack 02, and so forth. New fix packs are usually released every three months. The base version of Tivoli Workload Scheduler must be installed before a fix pack can be installed.

Check for the latest fix pack level and download it so that you can update your Tivoli Workload Scheduler installation and test the end-to-end scheduling environment at the latest fix pack level.

Tip: Fix packs for Tivoli Workload Scheduler can be downloaded from:
ftp://ftp.software.ibm.com
Log on with user ID anonymous and your e-mail address as the password. Fix packs for Tivoli Workload Scheduler are in the directory /software/tivoli_support/patches/patches_8.2.0.

At the time of writing, the latest fix pack for Tivoli Workload Scheduler was FixPack 04. When the fix pack is downloaded, installation guidelines can be found in the 8.2.0-TWS-FP04.README file.

Note: FixPack 04 introduces a new Fault-Tolerant Switch Feature, which is described in a PDF file named FaultTolerantSwitch.README. The new Fault-Tolerant Switch Feature replaces and enhances the existing (traditional) Fault-Tolerant Switch Manager for backup domain managers.

The Tivoli Workload Scheduler documentation has been updated to the FixPack 03 level in the "April 2004 Revised" versions of the Tivoli Workload Scheduler manuals. As mentioned in 3.5.1, "Tivoli Workload Scheduler publications and documentation" on page 139, the latest versions of the Tivoli Workload Scheduler manuals can be downloaded from the IBM Web site.

3.5.3 System and software requirements

System and software requirements for installing and running Tivoli Workload Scheduler are described in great detail in the IBM Tivoli Workload Scheduler Release Notes Version 8.2 (Maintenance Release April 2004), SC32-1277.
It is very important to consult and read this release notes document before installing Tivoli Workload Scheduler, because the release notes contain system and software requirements as well as the latest installation and upgrade notes.

3.5.4 Network planning and considerations

Before you install Tivoli Workload Scheduler, be sure that you know about the various configuration examples. Each example has specific benefits and disadvantages. Here are some guidelines to help you find the right choice:

How large is your IBM Tivoli Workload Scheduler network? How many computers does it hold? How many applications and jobs does it run?
The size of your network will help you decide whether to use a single-domain or a multiple-domain architecture. If you have a small number of computers or a small number of applications to control with Tivoli Workload Scheduler, there may not be a need for multiple domains.

How many geographic locations will be covered in your Tivoli Workload Scheduler network? How reliable and efficient is the communication between locations?
This is one of the primary reasons for choosing a multiple-domain architecture. One domain for each geographical location is a common configuration. If you choose a single-domain architecture, you will be more reliant on the network to maintain continuous processing.

Do you need centralized or decentralized management of Tivoli Workload Scheduler?
A Tivoli Workload Scheduler network, with either a single domain or multiple domains, gives you the ability to manage Tivoli Workload Scheduler from a single node, the master domain manager. If you want to manage multiple locations separately, you can consider installing a separate Tivoli Workload Scheduler network at each location. Note that some degree of decentralized management is possible in a stand-alone Tivoli Workload Scheduler network by mounting or sharing file systems.

Do you have multiple physical or logical entities at a single site? Are there different buildings with several floors in each building? Are there different departments or business functions? Are there different applications?
These may be reasons for choosing a multi-domain configuration, such as a domain for each building, department, business function, or application (manufacturing, financial, engineering).
Do you run applications, such as SAP R/3, that operate with Tivoli Workload Scheduler?
If they are discrete and separate from other applications, you may choose to put them in a separate Tivoli Workload Scheduler domain.

Would you like your Tivoli Workload Scheduler domains to mirror your Windows NT domains?
This is not required, but may be useful.

Do you want to isolate or differentiate a set of systems based on performance or other criteria?
This may provide another reason to define multiple Tivoli Workload Scheduler domains to localize systems based on performance or platform type.

How much network traffic do you have now?
If your network traffic is manageable, the need for multiple domains is less important.

Do your job dependencies cross system boundaries, geographical boundaries, or application boundaries? For example, does the start of Job1 on workstation3 depend on the completion of Job2 running on workstation4?
The degree of interdependence between jobs is an important consideration when laying out your Tivoli Workload Scheduler network. If you use multiple domains, you should try to keep interdependent objects in the same domain. This will decrease network traffic and take better advantage of the domain architecture.

What level of fault tolerance do you require?
An obvious disadvantage of the single-domain configuration is the reliance on a single domain manager. In a multi-domain network, the loss of a single domain manager affects only the agents in its domain.

3.5.5 Backup domain manager

Each domain has a domain manager and, optionally, one or more backup domain managers. A backup domain manager (Figure 3-4 on page 143) must be in the same domain as the domain manager it is backing up. Backup domain managers must be fault-tolerant agents running the same product version as the domain manager they are supposed to replace, and they must have the Resolve Dependencies and Full Status options enabled in their workstation definitions.

If a domain manager fails during the production day, you can use either the Job Scheduling Console or the switchmgr command in the console manager command line (conman) to switch to a backup domain manager. A switch manager action can be executed by anyone with start and stop access to the domain manager and backup domain manager workstations.
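For illustration, switching DomainA to its backup domain manager FTA1 (the names used in Figure 3-4 on page 143) could be done from the conman command line roughly as follows; treat this as a sketch and check the conman reference for the exact syntax and the workstation from which you issue it:

conman "switchmgr DOMAINA;FTA1"

The domain name and the new manager's workstation name are the two positional arguments; the same switch can also be performed from the Job Scheduling Console, as noted above.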
A switch manager operation stops the backup domain manager, restarts it as the new domain manager, and converts the old domain manager to a fault-tolerant agent. The identities of the current domain managers are documented in the Symphony files on each FTA and remain in effect until a new Symphony file is received from the master domain manager (OPCMASTER).

Figure 3-4 Backup domain managers (BDM) within an end-to-end scheduling network (the master domain MASTERDM with the z/OS master domain manager OPCMASTER; DomainA with domain manager FDMA, FTA1 (AIX, BDM for DomainA), and FTA2 (OS/400); DomainB with domain manager FDMB, FTA3 (AIX, BDM for DomainB), and FTA4 (Solaris))

As mentioned in 2.3.5, "Making the end-to-end scheduling system fault tolerant" on page 84, a switch to a backup domain manager remains in effect until a new Symphony file is received from the master domain manager (OPCMASTER in Figure 3-4). If the switch to the backup domain manager is to remain active across a Tivoli Workload Scheduler for z/OS plan extension or replan, you must change the topology definitions in the Tivoli Workload Scheduler for z/OS DOMREC initialization statements: the backup domain manager fault-tolerant workstation should be defined as the domain manager for the domain. Example 3-5 shows how DOMREC for DomainA is changed so that the backup domain manager FTA1 in Figure 3-4 becomes the new domain manager for DomainA.
Because the change is also made in the DOMREC topology definition (in connection with the switch of the domain manager from FDMA to FTA1), FTA1 remains domain manager even if the Symphony file is re-created by the Tivoli Workload Scheduler for z/OS plan extend or replan jobs.

Example 3-5 Change in DOMREC for long-term switch to backup domain manager FTA1
DOMREC DOMAIN(DOMAINA) DOMMGR(FDMA) DOMPARENT(OPCMASTER)
Should be changed to:
DOMREC DOMAIN(DOMAINA) DOMMGR(FTA1) DOMPARENT(OPCMASTER)
Where FDMA is the name of the fault-tolerant workstation that was domain manager before the switch.

3.5.6 Performance considerations

Tivoli Workload Scheduler 8.1 introduced some important performance-related initialization parameters. These can be used to optimize or tune Tivoli Workload Scheduler networks. If you suffer from poor performance and have already isolated the bottleneck on the Tivoli Workload Scheduler side, you may want to take a closer look at the localopts parameters listed in Table 3-5 (default values are shown in the table).

Table 3-5 Performance-related localopts parameters
Syntax                            Default value
mm cache mailbox=yes/no           No
mm cache size = bytes             32
sync level=low/medium/high        High
wr enable compression=yes/no      No

These localopts parameters are described in detail in the following sections. For more information, see the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273, and the redbook IBM Tivoli Workload Scheduler Version 8.2: New Features and Best Practices, SG24-6628.

Mailman cache (mm cache mailbox and mm cache size)

Tivoli Workload Scheduler can read groups of messages from a mailbox and put them into a memory cache. Disk access through the cache is much faster than direct disk access. The advantage is even more relevant when you consider that traditional mailman needs at least two disk accesses for every mailbox message.
Important: The mm cache mailbox parameter can be used on both UNIX and Windows workstations. This option is not applicable (has no effect) on USS.

A special mechanism ensures that messages that are considered essential are not put into the cache but are handled immediately. This avoids loss of vital information in case of a mailman failure.

The settings in the localopts file that regulate the behavior of the mailman cache are:

mm cache mailbox
The default is no. Specify yes to enable mailman to use a reading cache for incoming messages.

mm cache size
Specify this option only if you use the mm cache mailbox option. The default is 32 bytes, which should be a reasonable value for most small and medium-sized Tivoli Workload Scheduler installations. The maximum value is 512; higher values are ignored.

Tip: If necessary, you can experiment with increasing this setting gradually for better performance. You can use values larger than 32 bytes for large networks, but in small networks do not set this value unnecessarily large, because doing so would reduce the memory available to other applications or other Tivoli Workload Scheduler processes.

File system synchronization level (sync level)

The sync level attribute specifies the frequency at which Tivoli Workload Scheduler synchronizes messages held on disk with those in memory. There are three possible settings:

Low: Lets the operating system handle the speed of write access. This option speeds up all processes that use mailbox files. Disk usage is notably reduced; if the file system is reliable, data integrity should be assured anyway.

Medium: Makes an update to disk after a transaction has completed. This setting can be a good trade-off between acceptable performance and high security against loss of data. Writes are transaction-based; data written is always consistent.

High (the default setting): Makes an update every time data is entered.
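The cache and synchronization options above are set in the localopts file of the workstation in question. As a rough sketch only (the values are illustrative, not a recommendation for every installation), the entries on a domain manager in a large network with reliable disk subsystems might look like this:

# Mailman cache (not applicable on USS)
mm cache mailbox = yes
mm cache size    = 512
# File system synchronization; low is suggested below for end-to-end scheduling
sync level       = low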
Important considerations for sync level usage:
  For most UNIX systems (especially newer UNIX systems with reliable disk subsystems), a setting of low or medium is recommended. In end-to-end scheduling, we recommend that you set this to low, because host disk subsystems are considered highly reliable.
  This option is not applicable on Windows systems.
  Regardless of the sync level value that you set in the localopts file, Tivoli Workload Scheduler makes an update every time data is entered for messages that are considered essential (that is, it uses sync level=high for essential messages). Essential messages are those considered of the utmost importance by Tivoli Workload Scheduler.

Sinfonia file compression (wr enable compression)

Starting with Tivoli Workload Scheduler 8.1, domain managers can distribute Sinfonia files to their FTAs in compressed form. Each Sinfonia record is compressed by the domain manager's mailman process and decompressed by the FTA's writer process. A compressed Sinfonia record is about seven times smaller. Compression can be particularly useful when the Symphony file is huge and the network connection between two nodes is slow or unreliable (WAN). If any FTAs in the network have pre-8.1 versions of Tivoli Workload Scheduler, Tivoli Workload Scheduler domain managers can send Sinfonia files to those workstations in uncompressed form.

The following localopts setting is used to control compression in Tivoli Workload Scheduler:
wr enable compression=yes: Sinfonia will be compressed. The default is no.

Tip: Due to the overhead of compression and decompression, we recommend that you use compression only if Sinfonia is 4 MB or larger.

3.5.7 Fault-tolerant agent (FTA) naming conventions

Each FTA represents a physical machine within a Tivoli Workload Scheduler network. Depending on the size of your distributed environment or network and how much it can grow in the future, it makes sense to think about naming conventions for your FTAs and, eventually, your Tivoli Workload Scheduler domains. A good naming convention for FTAs and domains helps to identify an FTA easily in terms of where it is located or the business unit it belongs to. This becomes even more important in end-to-end scheduling environments, because the length of the workstation name for an FTA is limited in Tivoli Workload Scheduler for z/OS.
Note: The name of any workstation in Tivoli Workload Scheduler for z/OS, including the workstations for fault-tolerant agents used in end-to-end scheduling, is limited to four characters. The name must be alphanumeric, and the first character must be alphabetic or national.

Figure 3-5 on page 147 shows a typical end-to-end network. It consists of two domain managers at the first level, two backup domain managers, and some FTAs.

Figure 3-5 Example of naming convention for FTA workstations in an end-to-end network (the master domain MASTERDM with the z/OS master domain manager OPCMASTER; Domain1 with domain manager F100, backup domain manager F101 (AIX), and F102 (OS/400); Domain2 with domain manager F200, backup domain manager F201 (AIX), and F202 (Solaris))

In Figure 3-5, we illustrate one naming convention for the fault-tolerant workstations in Tivoli Workload Scheduler for z/OS. The idea behind this naming convention is the following:

First character: The character F is used to identify the workstation as an FTA. This makes it possible, for example, to create lists in the legacy ISPF interface and in the JSC that show all FTAs.
Second character: A character or number used to identify the domain of the workstation.

Third and fourth characters: Used to allow a high number of uniquely named servers or machines. The last two characters are reserved for numbering each workstation.

With this naming convention there is room to define 1296 (that is, 36*36) fault-tolerant workstations for each domain, named F1** to FZ**. If the domain manager fault-tolerant workstation for the first domain is named F100 (F000 is not used), it is possible to define 35 domains with 1296 FTAs in each domain, that is, 45360 FTAs in total.

This example is meant to give you an idea of the number of fault-tolerant workstations that can be defined, even using only four characters in the name. In the example, we did not change the first character in the workstation name: it was fixed at F. It is, of course, possible to use different characters here as well; for example, one could use D for domain managers and F for fault-tolerant agents. Changing the first character in the workstation name increases the total number of fault-tolerant workstations that can be defined in Tivoli Workload Scheduler for z/OS. The example cannot cover all specific requirements; it only demonstrates that naming needs careful consideration.

Because a four-character name for the FTA workstation does not tell much about the server name or IP address of the server where the FTA is installed, another good convention is to put the server name (the DNS name or perhaps the IP address) in the description field of the workstation in Tivoli Workload Scheduler for z/OS. The description field for workstations in Tivoli Workload Scheduler for z/OS allows up to 32 characters. This makes it much easier to relate the four-character workstation name to a specific server in your distributed network. Example 3-6 shows how the description field can relate the four-character workstation name to the server name for the fault-tolerant workstations used in Figure 3-5 on page 147.

Tip: The host name in the workstation description field, in conjunction with the four-character workstation name, provides an easy way to document your configured environment.

Example 3-6 Workstation description field (copy of workstation list in the ISPF panel)
Work station                           T R      Last update
name description                               user   date     time
F100 COPENHAGEN - AIX DM for Domain1   C A      CCFBK  04/07/16 14.59
F101 STOCKHOLM - AIX BDM for Domain1   C A      CCFBK  04/07/16 15.00
F102 OSLO - OS/400 LFTA in DM1         C A      CCFBK  04/07/16 15.00
F200 ROM - AIX DM for Domain2          C A      CCFBK  04/07/16 15.02
F201 MILANO - AIX BDM for Domain2      C A      CCFBK  04/07/16 15.08
F202 VENICE - SOLARIS FTA in DM2       C A      CCFBK  04/07/16 15.17

3.6 Planning for the Job Scheduling Console

In this section, we discuss planning considerations for the Tivoli Workload Scheduler Job Scheduling Console (JSC). The JSC is not a required component for running end-to-end scheduling with Tivoli Workload Scheduler. The JSC provides a unified GUI to different job-scheduling engines: the Tivoli Workload Scheduler for z/OS controller and Tivoli Workload Scheduler master domain managers, domain managers, and fault-tolerant agents.

Job Scheduling Console 1.3 is the version that is delivered and used with Tivoli Workload Scheduler 8.2 and Tivoli Workload Scheduler for z/OS 8.2. The JSC code is shipped together with the Tivoli Workload Scheduler for z/OS or the Tivoli Workload Scheduler code.

With the JSC, it is possible to work with different Tivoli Workload Scheduler for z/OS controllers (such as test and production) from one GUI. From this same GUI, the user can at the same time work with Tivoli Workload Scheduler master domain managers or fault-tolerant agents. In end-to-end scheduling environments, the JSC can be a helpful tool for analyzing problems with the end-to-end scheduling network or for giving dedicated users access to their own servers (fault-tolerant agents).

The JSC is installed locally on your personal desktop, laptop, or workstation. Before you can run and use the JSC, the following additional components must be installed and configured:

Tivoli Management Framework, V3.7.1 or V4.1

Installed and configured in the Tivoli Management Framework:
– Job Scheduling Services (JSS)
– Tivoli Workload Scheduler connector
– Tivoli Workload Scheduler for z/OS connector
– JSC instances for the Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS environments
Server started task on the mainframe used for JSC communication. This server started task is necessary to communicate and work with Tivoli Workload Scheduler for z/OS from the JSC.

3.6.1 Job Scheduling Console documentation

The documentation for the Job Scheduling Console includes:

IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes (Maintenance Release April 2004), SC32-1277

IBM Tivoli Workload Scheduler Job Scheduling Console Users Guide (Maintenance Release April 2004), SC32-1257. This manual contains information about how to:
– Install and update the JSC.
– Install and update JSS, the Tivoli Workload Scheduler connector, and the Tivoli Workload Scheduler for z/OS connector.
– Create Tivoli Workload Scheduler connector instances and Tivoli Workload Scheduler for z/OS connector instances.
– Use the JSC to work with Tivoli Workload Scheduler.
– Use the JSC to work with Tivoli Workload Scheduler for z/OS.

The documentation is not shipped in hardcopy form with the JSC code, but is available in PDF format on the JSC Version 1.3 CD-ROM.

Note: The publications are also available for download in PDF format at:
http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html
Here you can find the newest versions of the books. Look for books marked "Maintenance Release April 2004" because they have been updated with documentation changes introduced after the base version of the product was released in June 2003.

3.6.2 Job Scheduling Console service (fix packs)

Before installing the JSC, it is important to check for and, if necessary, download the latest service (fix pack) level. Service for the JSC is released in packages that normally contain a full replacement of the JSC. These packages are called fix packs and are numbered FixPack 01, FixPack 02, and so forth. Usually, a new fix pack is released every three months. The base version of the JSC must be installed before a fix pack can be installed.
Tip: Fix packs for the JSC can be downloaded from the IBM FTP site:
ftp://ftp.software.ibm.com
Log in with user ID anonymous and use your e-mail address as the password. Look for JSC fix packs in the /software/tivoli_support/patches/patches_1.3.0 directory. Installation guidelines are in the 1.3.0-JSC-FP05.README text file.

At the time this book was written, the latest fix pack for the JSC was FixPack 05. It is important that the JSC fix pack level correspond to the connector fix pack level; that is, apply the same fix pack level to the JSC and to the connector at the same time.

Note: FixPack 05 improves performance for the JSC in two areas:
  Response time
  Memory consumption

3.6.3 Compatibility and migration considerations for the JSC

The Job Scheduling Console feature level 1.3 can work with different versions of Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS. Before installing the Job Scheduling Console, consider Table 3-6 and Table 3-7 on page 152, which summarize the supported interoperability combinations of the Job Scheduling Console, the connectors, and the Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS engines.

Table 3-6 shows the supported combinations of JSC, Tivoli Workload Scheduler connector, and Tivoli Workload Scheduler engine (master domain manager, domain manager, or fault-tolerant agent).

Table 3-6 Tivoli Workload Scheduler connector and engine combinations
Job Scheduling Console    Connector    Tivoli Workload Scheduler engine
1.3                       8.2          8.2
1.3                       8.1          8.1
1.2                       8.2          8.2

Note: The engine can be a fault-tolerant agent, a domain manager, or a master domain manager.
Table 3-7 shows the supported combinations of JSC, Tivoli Workload Scheduler for z/OS connector, and Tivoli Workload Scheduler for z/OS engine (controller).

Table 3-7 Tivoli Workload Scheduler for z/OS connector and engine combinations
Job Scheduling Console    Connector    IBM Tivoli Workload Scheduler for z/OS engine (controller)
1.3                       1.3          8.2
1.3                       1.3          8.1
1.3                       1.3          2.3 (Tivoli OPC)
1.3                       1.2          8.1
1.3                       1.2          2.3 (Tivoli OPC)
1.2                       1.3          8.2
1.2                       1.3          8.1
1.2                       1.3          2.3 (Tivoli OPC)

Note: If your environment comprises installations of updated and back-level versions of the products, some functions might not work correctly. For example, new functions such as Secure Socket Layer (SSL) protocol support, return code mapping, late job handling, extended task names, and recovery information for z/OS jobs are not supported by Job Scheduling Console feature level 1.2. A warning message is displayed if you try to open an object created with the new functions, and the object is not opened.

Satisfy the following requirements before installing

The following software and hardware prerequisites and other considerations should be taken care of before installing the JSC.

Software
The following software is required:
  Tivoli Management Framework Version 3.7.1 with FixPack 4, or higher
  Tivoli Job Scheduling Services 1.2

Hardware
The following hardware is required:
  CD-ROM drive
  Approximately 200 MB of free disk space for installation of the JSC
  At least 256 MB RAM (preferably 512 MB RAM)
Other
The Job Scheduling Console can be installed on any workstation that has a TCP/IP connection. It can connect only to a server or workstation that has properly configured installations of the following products:
  Job Scheduling Services and the IBM Tivoli Workload Scheduler for z/OS connector (mainframe-only scheduling solution)
  Job Scheduling Services and the Tivoli Workload Scheduler connector (distributed-only scheduling solution)
  Job Scheduling Services, the IBM Tivoli Workload Scheduler for z/OS connector, and the Tivoli Workload Scheduler connector (end-to-end scheduling solution)

The latest and most up-to-date system and software requirements for installing and running the Job Scheduling Console are described in great detail in the IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes, Feature level 1.3, SC32-1258 (remember to get the April 2004 revision). It is important to consult and read this release notes document before installing the JSC, because the release notes contain system and software requirements as well as the latest installation and upgrade notes.

3.6.4 Planning for Job Scheduling Console availability

The legacy GUIs gconman and gcomposer are no longer included with Tivoli Workload Scheduler, so the Job Scheduling Console fills the role of those programs as the primary interface to Tivoli Workload Scheduler. Staff who work only with the JSC and are not familiar with the command line interface (CLI) depend on continuous JSC availability. This requirement must be taken into consideration when planning for a Tivoli Workload Scheduler backup domain manager. We therefore recommend that there be a Tivoli Workload Scheduler connector instance on the Tivoli Workload Scheduler backup domain manager. This guarantees JSC access without interruption.

Because the JSC communicates with Tivoli Workload Scheduler for z/OS, Tivoli Workload Scheduler domain managers, and Tivoli Workload Scheduler backup domain managers through one IBM Tivoli Management Framework (Figure 3-6 on page 154), this framework can be a single point of failure. Consider establishing a backup Tivoli Management Framework, or minimize the risk of an outage in the framework by using (for example) clustering techniques. You can read more about how to make a Tivoli Management Framework fail-safe in the redbook High Availability Scenarios with IBM Tivoli Workload Scheduler and IBM Tivoli Framework, SG24-6632.

Figure 3-6 on page 154 shows two domain managers at the first level directly connected to Tivoli Workload Scheduler for z/OS (OPC). In end-to-end scheduling environments it is, as
mentioned earlier, advisable to plan for and install the connectors and their prerequisite components (Tivoli Management Framework and Job Scheduling Services) on all first-level domain managers.

Figure 3-6 JSC connections in an end-to-end environment (the Job Scheduling Console connects through the Tivoli Management Framework, with Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS (OPC) connectors on the first-level domain managers DomainA and DomainB, to the z/OS master domain manager with its JSC server, databases, and current plan, and from there to the other domain managers and FTAs)

3.6.5 Planning for server started task for JSC communication

To use the JSC to communicate with Tivoli Workload Scheduler for z/OS, the z/OS system must have a started task that handles IP communication with the JSC (more precisely, with the Tivoli Workload Scheduler for z/OS (OPC) connector in the Tivoli Management Framework); see Figure 3-6. The same server started task can be used for JSC communication and for end-to-end scheduling. We recommend having two server started tasks: one dedicated to end-to-end scheduling and one dedicated to JSC communication. With two server started tasks, the JSC server started task can be stopped and started without any impact on the end-to-end scheduling network.
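As a rough sketch of this split (the SERVOPTS keywords are those documented for Tivoli Workload Scheduler for z/OS 8.2; the subsystem name, member name, host name, and port are only illustrative), the two server started tasks could use initialization statements along these lines:

/* Server started task dedicated to end-to-end scheduling */
SERVOPTS SUBSYS(TWSC)           /* controller subsystem name      */
         PROTOCOL(E2E)          /* end-to-end protocol            */
         TPLGYPRM(TPLGPARM)     /* member containing TOPOLOGY     */

/* Server started task dedicated to JSC communication */
SERVOPTS SUBSYS(TWSC)           /* controller subsystem name      */
         PROTOCOL(JSC)          /* JSC protocol                   */
         JSCHOSTNAME(TWSCJSC)   /* host name the JSC connects to  */
         PORTNUMBER(5000)       /* port used by the JSC server    */

With this split, stopping the JSC server (for example, to apply maintenance) leaves the end-to-end server and its connections to the first-level domain managers untouched.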
The JSC server started task acts as the communication layer between the Tivoli Workload Scheduler for z/OS connector in the Tivoli Management Framework and the Tivoli Workload Scheduler for z/OS controller.

3.7 Planning for migration or upgrade from previous versions

If you are running end-to-end scheduling with Tivoli Workload Scheduler for z/OS Version 8.1 and Tivoli Workload Scheduler Version 8.1, you should plan how to do the upgrade or migration from Version 8.1 to Version 8.2. This is also the case if you are running an even older version, such as Tivoli OPC Version 2.3.0, Tivoli Workload Scheduler 7.0, or Maestro 6.1.

Tivoli Workload Scheduler 8.2 supports backward compatibility, so you can upgrade your network gradually, at different times, and in no particular order. You can upgrade top-down; that is, upgrade the Tivoli Workload Scheduler for z/OS controller (master) first, then the domain managers at the first level, then the subordinate domain managers and fault-tolerant agents. Or you can upgrade bottom-up, starting with the fault-tolerant agents, then upgrading in sequence and leaving the Tivoli Workload Scheduler for z/OS controller (master) for last. However, if you upgrade the Tivoli Workload Scheduler for z/OS controller first, some new Version 8.2 functions (firewall support, centralized script) will not work until the whole network is upgraded.

During the upgrade procedure, the installation backs up all of the configuration information, installs the new product code, and automatically migrates old scheduling data and configuration information. However, it does not migrate user files or directories placed in the Tivoli Workload Scheduler for z/OS server work directory or in the Tivoli Workload Scheduler TWShome directory.

Before doing the actual installation, you should decide on the migration or upgrade strategy that will be best for your end-to-end scheduling environment. This is also the case if you are upgrading from old Tivoli OPC tracker agents or if you decide to merge a stand-alone Tivoli Workload Scheduler environment with your Tivoli Workload Scheduler for z/OS environment to create a new end-to-end scheduling environment.

Our experience is that installation and upgrading of an existing end-to-end scheduling environment takes some time, and the time required depends on the size of the environment. It is good to be prepared from the first day and to make realistic implementation plans and schedules.
Another important thing to remember is that Tivoli Workload Scheduler end-to-end scheduling has been improved and has changed considerably from Version 8.1 to Version 8.2. If you are running Tivoli Workload Scheduler 8.1 end-to-end scheduling and are planning to upgrade to Version 8.2 end-to-end scheduling, we recommend that you:

1. First do a "one-to-one" upgrade from Tivoli Workload Scheduler 8.1 end-to-end scheduling to Tivoli Workload Scheduler 8.2 end-to-end scheduling.

2. When the upgrade is completed and you are running Tivoli Workload Scheduler 8.2 end-to-end scheduling in the whole network, start to implement the new functions and facilities that were introduced in Tivoli Workload Scheduler for z/OS 8.2 and Tivoli Workload Scheduler 8.2.

3.8 Planning for maintenance or upgrades

The Tivoli maintenance strategy for Tivoli Workload Scheduler introduces a new way to maintain the product more effectively and easily. On a quarterly basis, Tivoli provides updates with recent patches and offers a fix pack that is similar to a maintenance release. This fix pack can be ordered either via the common support Web page, ftp://ftp.software.ibm.com/software/tivoli_support/patches, or shipped on a CD. Ask your local Tivoli support for more details.

In this book, we recommend upgrading your end-to-end scheduling environment to the FixPack 04 level. This level will change with time, of course, so when you start the installation you should plan to download and install the latest fix pack level.
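For reference, fetching a fix pack from the support FTP site with a command-line client might look roughly like the following session; the archive name shown here is hypothetical, so check the directory listing and the accompanying README for the actual file names:

ftp ftp.software.ibm.com
Name: anonymous
Password: <your e-mail address>
ftp> cd /software/tivoli_support/patches/patches_8.2.0
ftp> binary
ftp> get 8.2.0-TWS-FP04.README
ftp> get 8.2.0-TWS-FP04-AIX.tar
ftp> quit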
Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling

When the planning described in the previous chapter is completed, it is time to install the software (Tivoli Workload Scheduler for z/OS V8.2 and Tivoli Workload Scheduler V8.2 and, optionally, Tivoli Workload Scheduler Job Scheduling Console V1.3) and configure the installed software for end-to-end scheduling.

In this chapter, we provide details on how to install and configure Tivoli Workload Scheduler end-to-end scheduling and the Job Scheduling Console (JSC), including how to perform the installation and the necessary steps involved. We describe installation of:
  IBM Tivoli Workload Scheduler for z/OS V8.2
  IBM Tivoli Workload Scheduler V8.2
  IBM Tivoli Workload Scheduler Job Scheduling Console V1.3

We also describe installation of the components that are required to run the JSC.
4.1 Before the installation is started

Before you start the installation, it is important to understand that Tivoli Workload Scheduler end-to-end scheduling involves two components:
  IBM Tivoli Workload Scheduler for z/OS
  IBM Tivoli Workload Scheduler

The Tivoli Workload Scheduler Job Scheduling Console is not a required product, but our experience from working with the Tivoli Workload Scheduler end-to-end scheduling environment is that the JSC is a very helpful tool for troubleshooting and for new users who do not know much about job scheduling, Tivoli Workload Scheduler, or Tivoli Workload Scheduler for z/OS.

The overall installation and customization process is not complicated and can be narrowed down to the following steps:

1. Design the topology (for example, the domain hierarchy and the number of domains) for the distributed Tivoli Workload Scheduler network in which Tivoli Workload Scheduler for z/OS will do the workload scheduling. Use the guidelines in 3.5.4, "Network planning and considerations" on page 141 when designing the topology.

2. Install and verify the Tivoli Workload Scheduler for z/OS controller and end-to-end server tasks in the host environment. Installation and verification of Tivoli Workload Scheduler for z/OS end-to-end scheduling is described in 4.2, "Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling" on page 159.

Note: If you run a previous release of IBM Tivoli Workload Scheduler for z/OS (OPC), you should also migrate from this release to Tivoli Workload Scheduler for z/OS 8.2 as part of the installation. Migration steps are described in the Tivoli Workload Scheduler for z/OS Installation Guide, SH19-4543. Migration is performed with a standard program supplied with Tivoli Workload Scheduler for z/OS.

3. Install and verify the Tivoli Workload Scheduler distributed workstations (fault-tolerant agents). Installation and verification of the Tivoli Workload Scheduler distributed workstations is described in 4.3, "Installing Tivoli Workload Scheduler in an end-to-end environment" on page 207.
   Important: These workstations can be installed and configured before the Tivoli Workload Scheduler for z/OS components, but it will not be possible to test the connections before the mainframe components are installed and ready.

4. Define and activate fault-tolerant workstations (FTWs) in the Tivoli Workload Scheduler for z/OS controller:
   – Define FTWs in the Tivoli Workload Scheduler for z/OS database.
   – Activate the FTW definitions by running the plan extend or replan batch job.
   – Verify that the workstations are active and linked.
   This is described in 4.4, "Define, activate, verify fault-tolerant workstations" on page 211.

5. Create fault-tolerant workstation jobs and job streams for the jobs to be executed on the FTWs, using either centralized script, non-centralized script, or a combination. This is described in 4.5, "Creating fault-tolerant workstation job definitions and job streams" on page 217.

6. Do a verification test of Tivoli Workload Scheduler for z/OS end-to-end scheduling. The verification test is used to verify that the Tivoli Workload Scheduler for z/OS controller can schedule and track jobs on the FTWs. The verification test should also confirm that it is possible to browse the job log for completed jobs run on the FTWs. This is described in 4.6, "Verification test of end-to-end scheduling" on page 235.

If you would like to use the Job Scheduling Console to work with Tivoli Workload Scheduler for z/OS, Tivoli Workload Scheduler, or both, you should also activate support for the JSC. The necessary installation steps for activating support for the JSC are described in 4.7, "Activate support for the Tivoli Workload Scheduler Job Scheduling Console" on page 245.

4.2 Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling

In this section, we guide you through the installation process of Tivoli Workload Scheduler for z/OS, especially the end-to-end feature. We do not duplicate the
entire installation of the base product, which is described in IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264.

To activate support for end-to-end scheduling in Tivoli Workload Scheduler for z/OS, so that it can schedule jobs on the Tivoli Workload Scheduler FTAs, follow these steps:

1. Run EQQJOBS and specify Y for the end-to-end feature. See 4.2.1, "Executing EQQJOBS installation aid" on page 162.

2. Define controller (engine) and tracker (agent) subsystems in SYS1.PARMLIB. See 4.2.2, "Defining Tivoli Workload Scheduler for z/OS subsystems" on page 167.

3. Allocate the end-to-end data sets by running the EQQPCS06 sample generated by EQQJOBS. See 4.2.3, "Allocate end-to-end data sets" on page 168.

4. Create and customize the work directory by running the EQQPCS05 sample generated by EQQJOBS. See 4.2.4, "Create and customize the work directory" on page 170.

5. Create started task procedures for Tivoli Workload Scheduler for z/OS. See 4.2.5, "Create started task procedures for Tivoli Workload Scheduler for z/OS" on page 173.

6. Define the workstation (CPU) configuration and domain organization by using the CPUREC and DOMREC statements in a new PARMLIB member. (The default member name is TPLGINFO.) See 4.2.6, "Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling" on page 174, "DOMREC statement" on page 185, "CPUREC statement" on page 187, and Figure 4-6 on page 176.

7. Define Windows user IDs and passwords by using the USRREC statement in a new PARMLIB member. (The default member name is USRINFO.) Remember that you have to define Windows user IDs and passwords only if you have fault-tolerant agents on supported Windows platforms and want to schedule jobs to be run on these Windows platforms. See "USRREC statement" on page 195.
8. Define the end-to-end configuration by using the TOPOLOGY statement in a new PARMLIB member. (The default member name is TPLGPARM.) The TOPOLOGY statement is described in "TOPOLOGY statement" on page 178. In the TOPOLOGY statement, you should specify the following:
   – For the TPLGYMEM keyword, write the name of the member used in step 6. (See Figure 4-6 on page 176.)
   – For the USRMEM keyword, write the name of the member used in step 7 on page 160. (See Figure 4-6 on page 176.)

9. Add the TPLGYSRV keyword to the OPCOPTS statement in the Tivoli Workload Scheduler for z/OS controller to specify the server name that will be used for end-to-end communication. See "OPCOPTS TPLGYSRV(server_name)" on page 176.

10. Add the TPLGYPRM keyword to the SERVOPTS statement in the Tivoli Workload Scheduler for z/OS end-to-end server to specify the member name used in step 8 on page 161. This step activates end-to-end communication in the end-to-end server started task. See "SERVOPTS TPLGYPRM(member name/TPLGPARM)" on page 177.

11. Add the TPLGYPRM keyword to the BATCHOPT statement to specify the member name used in step 8 on page 161. This step activates the end-to-end feature in the plan extend, plan replan, and Symphony renew batch jobs. See "TPLGYPRM(member name/TPLGPARM) in BATCHOPT" on page 177.

12. Optionally, customize the way the job name is generated in the Symphony file by the Tivoli Workload Scheduler for z/OS plan extend, replan, and Symphony renew batch jobs. The job name in the Symphony file can be tailored with the JTOPTS TWSJOBNAME() parameter. See 4.2.9, "The JTOPTS TWSJOBNAME() parameter" on page 200 for more information. If you decide to customize the job name layout in the Symphony file, be aware that this can require that you reallocate the EQQTWSOU data set with a larger record length. See "End-to-end input and output data sets" on page 168 for more information.

    Note: The JTOPTS TWSJOBNAME() parameter was introduced by APAR PQ77970.
13. Verify that the Tivoli Workload Scheduler for z/OS controller and server started tasks can be started (or restarted if already running) and verify that everything comes up correctly. Verification is described in 4.2.10, "Verify end-to-end installation in Tivoli Workload Scheduler for z/OS" on page 203.

4.2.1 Executing EQQJOBS installation aid

EQQJOBS is a CLIST-driven ISPF dialog that can help you install Tivoli Workload Scheduler for z/OS. EQQJOBS assists in the installation of the engine and agent by building batch-job JCL that is tailored to your requirements.

To make EQQJOBS executable, allocate these libraries to the DD statements in your TSO session:

- SEQQCLIB to SYSPROC
- SEQQPNL0 to ISPPLIB
- SEQQSKL0 and SEQQSAMP to ISPSLIB

Use the EQQJOBS installation aid as follows:

1. To invoke EQQJOBS, enter the TSO command EQQJOBS from an ISPF environment. The primary panel shown in Figure 4-1 appears.

   EQQJOBS0 ------------ EQQJOBS application menu --------------
   Select option ===>
   1 - Create sample job JCL
   2 - Generate OPC batch-job skeletons
   3 - Generate OPC Data Store samples
   X - Exit from the EQQJOBS dialog

   Figure 4-1 EQQJOBS primary panel

   You only need to select options 1 and 2 for end-to-end specifications. We do not step through the whole EQQJOBS dialog; instead, we show only the panels related to end-to-end scheduling. (The referenced panel names are indicated in the top-left corner of the panels, as shown in Figure 4-1.)

2. Select option 1 in panel EQQJOBS0 (and press Enter twice), and make your necessary input into panel EQQJOBS8. (See Figure 4-2 on page 163.)
   EQQJOBS8 ---------------------------- Create sample job JCL --------------------
   Command ===>

   END TO END FEATURE:              Y  (Y= Yes ,N= No)
   Installation Directory     ===> /usr/lpp/TWS/V8R2M0_____________________
                              ===> ________________________________________
                              ===> ________________________________________
   Work Directory             ===> /var/inst/TWS___________________________
                              ===> ________________________________________
                              ===> ________________________________________
   User for OPC address space ===> UID_____
   Refresh CP group           ===> GID_____

   RESTART AND CLEANUP (DATA STORE) N  (Y= Yes ,N= No)
   Reserved destination       ===> OPC_____
   Connection type            ===> SNA      (SNA/XCF)
   SNA Data Store luname      ===> ________ (only for SNA connection)
   SNA FN task luname         ===> ________ (only for SNA connection)
   Xcf Group                  ===> ________ (only for XCF connection)
   Xcf Data store member      ===> ________ (only for XCF connection)
   Xcf FL task member         ===> ________ (only for XCF connection)

   Press ENTER to create sample job JCL

   Figure 4-2 Server-related input panel

The following definitions are important:

– END TO END FEATURE
  Specify Y if you want to install end-to-end scheduling and run jobs on Tivoli Workload Scheduler fault-tolerant agents.

– Installation Directory
  Specify the HFS path where SMP/E has installed the Tivoli Workload Scheduler for z/OS files for UNIX System Services when applying the end-to-end enabler feature. This directory is the one containing the bin directory. The default path is /usr/lpp/TWS/V8R2M0. The installation directory is created by SMP/E job EQQISMKD and populated by applying the end-to-end feature (JWSZ103). It should be mounted read-only on every system in your sysplex.

– Work Directory
  Specify where the subsystem-specific files are. Replace the default with a name that uniquely identifies your subsystem. Each subsystem that will use the fault-tolerant workstations must have its own work directory. Only the server and the daily planning batch jobs update the work directory.
  This directory is where the end-to-end processes have their working files (Symphony, event files, traces). It should be mounted read/write on every system in your sysplex.

  Important: To configure end-to-end scheduling in a sysplex environment successfully, make sure that the work directory is available to all systems in the sysplex. This way, in case of a takeover situation, the new server will be started on another system in the sysplex, and the server must be able to access the work directory to continue processing.

  As described in Section 3.4.4, "Hierarchical File System (HFS) cluster" on page 124, we recommend having dedicated HFS clusters for each end-to-end scheduling environment (end-to-end server started task), that is:

  - One HFS cluster for the installation binaries per environment (test, production, and so forth)
  - One HFS cluster for the work files per environment (test, production, and so forth)

  The work HFS clusters should be mounted in read/write mode, and the HFS cluster with the binaries should be mounted read-only. This is because the working directory is application-specific and contains application-related data. It also makes your backup easier. The size of the cluster depends on the size of the Symphony file and how long you want to keep the stdlist files. We recommend that you allocate 2 GB of space.

– User for OPC address space
  This information is used to create the EQQPCS05 sample job, which builds the work directory with the right ownership. In order to run the end-to-end feature correctly, the ownership of the work directory and the files contained in it must be assigned to the same user ID that RACF associates with the server started task. In the User for OPC address space field, specify the RACF user ID used for the server address space. This is the name specified in the started-procedure table.

– Refresh CP group
  This information is also used to create the EQQPCS05 sample job. In order to create the new Symphony file, the user ID that is used to run the daily planning batch jobs must belong to the group that you specify in this field. Make sure that the user ID associated with the server and controller address spaces (the one specified in the User for OPC address space field) belongs to this group or has this group as a supplementary group.
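  As an illustration only (the GID, UID, and the batch user TWSRES1 are invented values, not taken from the samples), the RACF setup behind these two fields could be sketched like this: define the group, give the server user a USS segment, and connect both the server user and the planning batch users to the group.

  ADDGROUP TWSGRP OMVS(GID(9999))
  ALTUSER E2ESERV OMVS(UID(3001) HOME('/var/inst/TWS') PROGRAM('/bin/sh'))
  CONNECT E2ESERV GROUP(TWSGRP)
  CONNECT TWSRES1 GROUP(TWSGRP)

  Adjust the names and numeric values to your installation standards; the point is simply that the end-to-end server user and every planning batch user must share the Refresh CP group.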
As you can see in Figure 4-3 on page 165, we defined RACF user ID TWSCE2E to the end-to-end server started task. User TWSCE2E belongs to RACF group TWSGRP. Therefore, all users of the RACF group TWSGRP and its supplementary group get access to create the Symphony file and to modify and read other files in the work directory.

Tip: The Refresh CP group field can be used to give access to the HFS files as well as to protect the HFS directory from unauthorized access.

Figure 4-3 Description of the input fields in the EQQJOBS8 panel. The figure annotates the EQQJOBS8 panel fields and the generated EQQPCS05 sample JCL as follows:

- HFS Installation Directory: where the Tivoli Workload Scheduler binaries that run in USS were installed (for example, translator, mailman, and batchman). This should be the same as the value of the TOPOLOGY BINDIR parameter.
- HFS Work Directory: where the Tivoli Workload Scheduler files that change throughout the day reside (for example, Symphony, mailbox files, and logs for the Tivoli Workload Scheduler processes that run in USS). This should be the same as the value of the TOPOLOGY WRKDIR parameter.
- User for OPC Address Space: the user associated with the end-to-end server started task (E2ESERV in the figure).
- Refresh CP Group: the group containing all users who will run batch planning jobs (CP extend, replan, refresh, and Symphony renew); TWSGRP in the figure.

The EQQPCS05 sample JCL shown in the figure:

//TWS      JOB ,'TWS INSTALL',CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1)
/*JOBPARM SYSAFF=SC64
//JOBLIB   DD DSN=TWS.V8R2M0.SEQQLMD0,DISP=SHR
//ALLOHFS  EXEC PGM=BPXBATCH,REGION=4M
//STDOUT   DD PATH='/tmp/eqqpcs05out',
//            PATHOPTS=(OCREAT,OTRUNC,OWRONLY),PATHMODE=SIRWXU
//STDIN    DD PATH='/usr/lpp/TWS/V8R2M0/bin/config',
//            PATHOPTS=(ORDONLY)
//STDENV   DD *
eqqBINDIR=/usr/lpp/TWS/V8R2M0
eqqWRKDIR=/var/inst/TWS
eqqUID=E2ESERV
eqqGID=TWSGRP
/*
//*
//OUTPUT1  EXEC PGM=IKJEFT01
//STDOUT   DD SYSOUT=*,DCB=(RECFM=V,LRECL=256)
//OUTPUT   DD PATH='/tmp/eqqpcs05out',
//            PATHOPTS=ORDONLY
//SYSTSPRT DD DUMMY
//SYSTSIN  DD *
OCOPY INDD(OUTPUT) OUTDD(STDOUT)
BPXBATCH SH rm /tmp/eqqpcs05out
/*

3. Press Enter to generate the installation job control language (JCL) jobs. Table 4-1 lists the subset of the sample JCL members created by EQQJOBS that relate to end-to-end scheduling.
Table 4-1 Sample JCL members related to end-to-end scheduling (created by EQQJOBS)

Member     Description
EQQCON     Sample started task procedure for a Tivoli Workload Scheduler for z/OS controller and tracker in the same address space.
EQQCONO    Sample started task procedure for a Tivoli Workload Scheduler for z/OS controller only.
EQQCONP    Sample initial parameters for a Tivoli Workload Scheduler for z/OS controller and tracker in the same address space.
EQQCONOP   Sample initial parameters for a Tivoli Workload Scheduler for z/OS controller only.
EQQPCS05   Creates the work directory in HFS used by the end-to-end server task.
EQQPCS06   Allocates the data sets necessary to run end-to-end scheduling.
EQQSER     Sample started task procedure for a server task.
EQQSERV    Sample initialization parameters for a server task.

4. EQQJOBS is also used to create batch-job skeletons, that is, skeletons for the batch jobs (such as plan extend, replan, and Symphony renew) that you can submit from the Tivoli Workload Scheduler for z/OS legacy ISPF panels. To create batch-job skeletons, select option 2 in the EQQJOBS primary panel (see Figure 4-1 on page 162). Make your necessary entries until panel EQQJOBSA appears (Figure 4-4).
   EQQJOBSA -------------- Generate OPC batch-job skeletons ----------------------
   Command ===>

   Specify if you want to use the following optional features:

   END TO END FEATURE:                  Y  (Y= Yes ,N= No)
   (To interoperate with TWS fault tolerant workstations)

   RESTART AND CLEAN UP (DATA STORE):   N  (Y= Yes ,N= No)
   (To be able to retrieve job log, execute dataset clean up actions and step restart)

   FORMATTED REPORT OF TRACKLOG EVENTS: Y  (Y= Yes ,N= No)
   EQQTROUT dsname     ===> TWS.V8R20.*.TRACKLOG____________________________
   EQQAUDIT output dsn ===> TWS.V8R20.*.EQQAUDIT.REPORT_____________________

   Press ENTER to generate OPC batch-job skeletons

   Figure 4-4 Generate end-to-end skeletons

5. Specify Y for the END TO END FEATURE if you want to use end-to-end scheduling to run jobs on Tivoli Workload Scheduler fault-tolerant workstations.

6. Press Enter. The skeleton members for daily plan extend, replan, and trial plan, and for long-term plan extend, replan, and trial plan, are created with the data sets related to end-to-end scheduling. Also, a new member is created. (See Table 4-2 on page 167.)

Table 4-2 End-to-end skeletons

Member     Description
EQQSYRES   Tivoli Workload Scheduler Symphony renew

4.2.2 Defining Tivoli Workload Scheduler for z/OS subsystems

The subsystems for the Tivoli Workload Scheduler for z/OS controllers (engines) and trackers (agents) on the z/OS images must be defined in the active subsystem-name-table member of SYS1.PARMLIB. It is advisable to install at least two Tivoli Workload Scheduler for z/OS controlling systems, one for testing and one for your production environment.

Note: We recommend that you install the trackers (agents) and the Tivoli Workload Scheduler for z/OS controller (engine) in separate address spaces.
To define the subsystems, update the active IEFSSNnn member in SYS1.PARMLIB. The name of the subsystem initialization module for Tivoli Workload Scheduler for z/OS is EQQINITF. Include records as in the following example.

Example 4-1 Subsystem definition record (IEFSSNnn member of SYS1.PARMLIB)

SUBSYS SUBNAME(subsystem name)    /* TWS for z/OS subsystem */
       INITRTN(EQQINITF)
       INITPARM('maxecsa,F')

Note that the subsystem name must be two to four characters: for example, TWSC for the controller subsystem and TWST for the tracker subsystems. Check IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264, for more information.

4.2.3 Allocate end-to-end data sets

Member EQQPCS06, created by EQQJOBS in your sample job JCL library, allocates the following VSAM and sequential data sets needed for end-to-end scheduling:

- End-to-end script library (EQQSCLIB) for non-centralized script
- End-to-end input and output event data sets (EQQTWSIN and EQQTWSOU)
- Current plan backup copy data set used to create the Symphony file (EQQSCPDS)
- End-to-end centralized script data library (EQQTWSCS)

We explain the use and allocation of these data sets in more detail below.

End-to-end script library (EQQSCLIB)

This script library data set includes members containing the commands or the job definitions for fault-tolerant workstations. It is required in the controller if you want to use the end-to-end scheduling feature. See Section 4.5.3, "Definition of non-centralized scripts" on page 221 for details about the JOBREC, RECOVERY, and VARSUB statements.

Tip: Do not compress members in this PDS. For example, do not use the ISPF PACK ON command, because Tivoli Workload Scheduler for z/OS does not use ISPF services to read it.

End-to-end input and output data sets

These data sets are required by every Tivoli Workload Scheduler for z/OS address space that uses the end-to-end feature. They record the descriptions of
events related to operations running on FTWs and are used by both the end-to-end enabler task and the translator process in the scheduler's server.

The data sets are device-dependent and can have only primary space allocation. Do not allocate any secondary space. They are automatically formatted by Tivoli Workload Scheduler for z/OS the first time they are used.

Note: An SD37 abend code is produced when Tivoli Workload Scheduler for z/OS formats a newly allocated data set. Ignore this error.

EQQTWSIN and EQQTWSOU are wrap-around data sets. In each data set, the header record is used to track the number of records read and written. To avoid the loss of event records, a writer task does not write any new records until more space is available, that is, until all existing records have been read.

The quantity of space that you need to define for each data set requires some attention. Because the two data sets are also used for job log retrieval, the limit for the job log length is half the maximum number of records that can be stored in the input events data set. Two cylinders are sufficient for most installations.

The maximum length of the events logged in these two data sets, including the job logs, is 120 bytes. It is possible to allocate the data sets with a longer logical record length, but record lengths greater than 120 bytes produce neither advantages nor problems. The maximum allowed value is 32000 bytes; greater values cause the end-to-end server started task to terminate. In both data sets there must be enough space for at least 1000 events (the maximum number of job log events is 500). Use this as a reference if you plan to define a record length greater than 120 bytes. When a record length of 120 bytes is used, the space allocation must be at least 1 cylinder. The data sets must be unblocked, and the block size must be the same as the logical record length.

A minimum record length of 160 bytes is necessary for the EQQTWSOU data set in order to be able to decide how to build the job name in the Symphony file. (Refer to the TWSJOBNAME parameter in the JTOPTS statement in Section 4.2.9, "The JTOPTS TWSJOBNAME() parameter" on page 200.)

For good performance, define the data sets on a device with plenty of availability. If you run programs that use the RESERVE macro, try to allocate the data sets on a device that is not, or is only slightly, reserved.

Initially, you may need to test your system to get an idea of the number and types of events that are created at your installation. After you have gathered enough information, you can reallocate the data sets. Before you reallocate a data set, ensure that the current plan is entirely up-to-date. You must also stop the
end-to-end sender and receiver task on the controller and the translator thread on the server that use this data set.

Tip: Do not move these data sets after they have been allocated. They contain device-dependent information and cannot be copied from one type of device to another, or moved around on the same volume. An end-to-end event data set that is moved will be re-initialized, which causes all events in the data set to be lost. If you have DFHSM or a similar product installed, you should specify that end-to-end event data sets are not migrated or moved.

Current plan backup copy data set (EQQSCPDS)

EQQSCPDS is the current plan backup copy data set that is used to create the Symphony file. During the creation of the current plan, the SCP data set is used as a CP backup copy for the production of the Symphony file. This VSAM data set is used when the end-to-end feature is active. It should be allocated with the same size as the CP1/CP2 and NCP VSAM data sets.

End-to-end centralized script data set (EQQTWSCS)

Tivoli Workload Scheduler for z/OS uses the end-to-end centralized script data set to temporarily store a script when it is downloaded from the JOBLIB data set to the agent for submission. Set the following attributes for EQQTWSCS:

DSNTYPE=LIBRARY,
SPACE=(CYL,(1,1,10)),
DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)

If you want to use centralized script support when scheduling end-to-end, use the EQQTWSCS DD statement in the controller and server started tasks. The data set must be a partitioned extended (PDSE) data set.

4.2.4 Create and customize the work directory

To install the end-to-end feature, you must allocate the files that the feature uses. Then, on every Tivoli Workload Scheduler for z/OS controller that will use this feature, run the EQQPCS05 sample to create the directories and files.

The EQQPCS05 sample must be run by a user with one of the following permissions:

- UNIX System Services (USS) user ID (UID) equal to 0
- BPX.SUPERUSER FACILITY class profile in RACF
- UID specified in the JCL in eqqUID and belonging to the group (GID) specified in the JCL in eqqGID

If the GID or the UID was not specified in EQQJOBS, you can specify them in the STDENV DD statement before running EQQPCS05.

The EQQPCS05 job runs a configuration script (named config) that resides in the installation directory. This configuration script creates a working directory with the right permissions. It also creates several files and directories in this working directory. (See Figure 4-5.)

Figure 4-5 EQQPCS05 sample JCL and the configure script. The figure shows that EQQPCS05 must be run as one of the following: a user associated with USS UID 0, a user with the BPX.SUPERUSER facility in RACF, or the user that will be specified in eqqUID (the user associated with the end-to-end server started task). The configure script creates subdirectories; copies configuration files; and sets the owner, group, and permissions of these directories and files. This last step is the reason EQQPCS05 must be run as a user with sufficient privileges. The resulting work directory contains, for example:

-rw-rw----  1 E2ESERV TWSGRP  755  Feb  3 13:01 NetConf
-rw-rw----  1 E2ESERV TWSGRP 1122  Feb  3 13:01 TWSCCLog.properties
-rw-rw----  1 E2ESERV TWSGRP 2746  Feb  3 13:01 localopts
drwxrwx---  2 E2ESERV TWSGRP 8192  Feb  3 13:01 mozart
drwxrwx---  2 E2ESERV TWSGRP 8192  Feb  3 13:01 pobox
drwxrwxr-x  3 E2ESERV TWSGRP 8192  Feb 11 09:48 stdlist

After running EQQPCS05, you can find the following files in the work directory:

localopts
  Defines the attributes of the local workstation (OPCMASTER) for the batchman, mailman, netman, and writer processes and for SSL. Only a subset of these attributes is used by the end-to-end server on z/OS. Refer to IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265, for information about customizing this file.
mozart/globalopts
  Defines the attributes of the IBM Tivoli Workload Scheduler network (OPCMASTER ignores them).

NetConf
  Netman configuration file.

TWSCCLog.properties
  Defines attributes for the trace function in the end-to-end server USS processes.

You will also find the following directories in the work directory:

- mozart
- pobox
- stdlist
- stdlist/logs (contains the log files for the USS processes)

Do not touch or delete any of these files or directories, which are created in the work directory by the EQQPCS05 job, unless you are directed to do so, for example in error situations.

Tip: If you execute this job in a sysplex that cannot share the HFS (prior to OS/390 V2R9) and you get messages such as "cannot create directory", take a closer look at which system the job actually ran on. Without system affinity, any member that has an initiator started in the right class can execute the job, so you must add a /*JOBPARM SYSAFF statement to make sure that the job runs on the system where the work HFS is mounted.

Note that the EQQPCS05 job does not define the physical HFS (z/OS) data set. EQQPCS05 initializes an existing HFS data set with the files and directories necessary for the end-to-end server started task. The physical HFS data set can be created with a job that contains an IEFBR14 step, as shown in Example 4-2.

Example 4-2 HFS data set creation

//USERHFS EXEC PGM=IEFBR14
//D1      DD DISP=(,CATLG),DSNTYPE=HFS,
//           SPACE=(CYL,(prispace,secspace,1)),
//           DSN=OMVS.TWS820.TWSCE2E.HFS

Allocate the HFS work data set with enough space for your end-to-end server started task. In most installations, 2 GB of disk space is enough.
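As a point of reference only (the data set name and mount point are taken from the examples above; the BPXPRMxx member and mount attributes are assumptions for your installation), the work HFS could then be mounted read/write with a statement like this in the active BPXPRMxx member:

MOUNT FILESYSTEM('OMVS.TWS820.TWSCE2E.HFS')
      MOUNTPOINT('/var/inst/TWS')
      TYPE(HFS)
      MODE(RDWR)

In a sysplex, make sure the mount point is reachable from every system that can host the end-to-end server, as discussed earlier for takeover situations.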
4.2.5 Create started task procedures for Tivoli Workload Scheduler for z/OS

Perform this task for the Tivoli Workload Scheduler for z/OS tracker (agent), controller (engine), and server started tasks. You must define a started task procedure or batch job for each Tivoli Workload Scheduler for z/OS address space.

The EQQJOBS dialog generates several members in the output sample library that you specified when running the EQQJOBS installation aid. These members contain started task JCL that is tailored with the values you entered in the EQQJOBS dialog. Tailor these members further, according to the data sets you require. See Table 4-1 on page 166.

Because the end-to-end server started task uses TCP/IP communication, modify the JCL of EQQSER in the following way:

- Make sure that the end-to-end server started task has access to the C runtime libraries, either as STEPLIB (include CEE.SCEERUN in the STEPLIB concatenation) or by LINKLIST (CEE.SCEERUN is in the LINKLIST concatenation).
- If you have multiple TCP/IP stacks, or if the name of the procedure that starts the TCPIP address space is not the default (TCPIP), change the end-to-end server started task procedure to include the SYSTCPD DD card pointing to a data set containing the TCPIPJOBNAME parameter. The standard method to determine the connecting TCP/IP image is:
  – Connect to the TCP/IP specified by TCPIPJOBNAME in the active TCPIP.DATA.
  – Locate TCPIP.DATA using the SYSTCPD DD card.

You can also use the end-to-end server TOPOLOGY TCPIPJOBNAME() parameter to specify the TCP/IP started task name that is used by the end-to-end server. This parameter can be used if you have multiple TCP/IP stacks or if the TCP/IP started task name is different from TCPIP.

You must have a server started task to handle end-to-end scheduling. You can use the same server to communicate with the Job Scheduling Console as well; in fact, the server can also handle APPC communication if configured to do so. In Tivoli Workload Scheduler for z/OS 8.2, the type of communication that the server started task should handle is defined in the new SERVOPTS PROTOCOL() parameter.
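For illustration, the JCL additions to the EQQSER procedure that were just described might look like the following sketch (the load library and TCP/IP parameter data set names are assumptions for your installation, not values from the samples):

//STEPLIB  DD DISP=SHR,DSN=TWS.V8R2M0.SEQQLMD0
//         DD DISP=SHR,DSN=CEE.SCEERUN
//SYSTCPD  DD DISP=SHR,DSN=SYS1.TCPPARMS(TCPDATA)

The SYSTCPD DD is only needed when the default TCPIP.DATA resolution is not suitable, for example with multiple TCP/IP stacks.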
In the PROTOCOL() parameter, you can specify any combination of:

- APPC: The server should handle APPC communication.
- JSC: The server should handle JSC communication.
- E2E: The server should handle end-to-end communication.

Recommendations: The Tivoli Workload Scheduler for z/OS controller and end-to-end server use TCP/IP services. Therefore, it is necessary to define a USS segment for the controller and end-to-end server started task user IDs. No special authorization is necessary; the user IDs only need to be defined to USS (any UID will do).

Even though it is possible to have one server started task handle end-to-end scheduling, JSC communication, and even APPC communication as well, we recommend having a server started task dedicated to end-to-end scheduling (SERVOPTS PROTOCOL(E2E)). This has the advantage that you do not have to stop the whole server process if the JSC server must be restarted.

The server started task is important for handling JSC and end-to-end communication. We recommend making the end-to-end and JSC server started tasks non-swappable and giving them at least the same dispatching priority as the Tivoli Workload Scheduler for z/OS controller (engine).

The Tivoli Workload Scheduler for z/OS controller uses the end-to-end server to communicate events to the FTAs. The end-to-end server starts multiple tasks and processes using UNIX System Services.

4.2.6 Initialization statements for Tivoli Workload Scheduler for z/OS end-to-end scheduling

Initialization statements for end-to-end scheduling fit into two categories:

1. Statements used to configure the Tivoli Workload Scheduler for z/OS controller (engine) and end-to-end server:
   a. OPCOPTS and TPLGYPRM statements for the controller
   b. SERVOPTS statement for the end-to-end server

2. Statements used to define the end-to-end topology (the network topology for the distributed Tivoli Workload Scheduler network). The end-to-end topology statements fall into two categories:
   a. Topology statements used to initialize the end-to-end server environment in USS on the mainframe:
      • The TOPOLOGY statement
   b. Statements used to describe the distributed Tivoli Workload Scheduler network and the responsibilities of the different Tivoli Workload Scheduler agents in this network:
      • The DOMREC, CPUREC, and USRREC statements
      These statements are used by the end-to-end server and by the plan extend, plan replan, and Symphony renew batch jobs. The batch jobs use the information when the Symphony file is created. See "Initialization statements used to describe the topology" on page 184.

We go through each initialization statement in detail and give you an example of how a distributed Tivoli Workload Scheduler network can be reflected in Tivoli Workload Scheduler for z/OS using the topology statements.

Table 4-3 Initialization statements related to end-to-end scheduling

Statement   Description
TPLGYSRV    Activates end-to-end scheduling in the Tivoli Workload Scheduler for z/OS controller.
TPLGYPRM    Activates end-to-end scheduling in the Tivoli Workload Scheduler for z/OS server and batch jobs (plan jobs).
TOPOLOGY    Specifies all the statements for end-to-end scheduling.
DOMREC      Defines domains in a distributed Tivoli Workload Scheduler network.
CPUREC      Defines agents in a distributed Tivoli Workload Scheduler network.
USRREC      Specifies user IDs and passwords for Windows users.

You can find more information in Tivoli Workload Scheduler for z/OS Customization and Tuning, SH19-4544.

Figure 4-6 on page 176 illustrates the relationship between the initialization statements and members related to end-to-end scheduling.
Figure 4-6 Relationship between end-to-end initialization statements and members. The figure shows the following configuration (controller TWSC, JSC server TWSCJSC, and end-to-end server TWSCE2E):

OPC controller TWSC:
  OPCOPTS  TPLGYSRV(TWSCE2E)
           SERVERS(TWSCJSC,TWSCE2E)
           ...

Daily planning batch jobs (CPE, LTPE, and so forth):
  BATCHOPT ...
           TPLGYPRM(TPLGPARM)
           ...

JSC server TWSCJSC:
  SERVOPTS SUBSYS(TWSC)
           PROTOCOL(JSC)
           CODEPAGE(500)
           JSCHOSTNAME(TWSCJSC)
           PORTNUMBER(42581)
           USERMAP(USERMAP)
           ...

End-to-end server TWSCE2E:
  SERVOPTS SUBSYS(TWSC)
           PROTOCOL(E2E)
           TPLGYPRM(TPLGPARM)
           ...

Topology parameters, EQQPARM(TPLGPARM):
  TOPOLOGY BINDIR(/tws)
           WRKDIR(/tws/wrkdir)
           HOSTNAME(TWSC.IBM.COM)
           PORTNUMBER(31182)
           TPLGYMEM(TPLGINFO)
           USRMEM(USERINFO)
           TRCDAYS(30)
           LOGLINES(100)
           ...

Topology records, EQQPARM(TPLGINFO):
  DOMREC ...
  DOMREC ...
  CPUREC ...
  CPUREC ...
  CPUREC ...
  CPUREC ...

User records, EQQPARM(USRINFO):
  USRREC ...
  USRREC ...
  USRREC ...

User map, EQQPARM(USERMAP):
  USER 'ROOT@M-REGION'
       RACFUSER(TMF)
       RACFGROUP(TIVOLI)
  ...

Note: It is possible to run many servers, but only one server can be the end-to-end server (also called the topology server). Specify this server using the TPLGYSRV controller option. The SERVERS option specifies the servers that will be started when the controller starts. If you plan to use the Job Scheduling Console to work with OPC, it is a good idea to run two separate servers: one for JSC connections (JSCSERV) and another for the connection with the Tivoli Workload Scheduler network (E2ESERV).

In the following sections, we cover the different initialization statements and members and describe their meaning and usage one by one. Refer to Figure 4-6 when reading these sections.

OPCOPTS TPLGYSRV(server_name)

Specify this keyword if you want to activate the end-to-end feature in the Tivoli Workload Scheduler for z/OS (OPC) controller (engine). If you specify this keyword, the IBM Tivoli Workload Scheduler enabler task is started. The specified server_name is the name of the end-to-end server that handles the events to and from the FTAs. Only one server can handle events to and from the FTAs. This keyword is defined in OPCOPTS.
Tip: If you want to let the Tivoli Workload Scheduler for z/OS controller start and stop the end-to-end server, use the SERVERS keyword in the OPCOPTS parmlib member (see Figure 4-6 on page 176).

SERVOPTS TPLGYPRM(member name/TPLGPARM)

The SERVOPTS statement is the first statement read by the end-to-end server started task. In SERVOPTS, you specify different initialization options for the server started task, such as:

- The name of the Tivoli Workload Scheduler for z/OS controller that the server should communicate with (serve). The name is specified with the SUBSYS() keyword.
- The type of protocol. The PROTOCOL() keyword is used to specify the type of communication used by the server. In Tivoli Workload Scheduler for z/OS 8.2, you can specify any combination of the following values, separated by commas: E2E, JSC, APPC.

  Note: With Tivoli Workload Scheduler for z/OS 8.2, the TCPIP value has been replaced by the combination of the E2E and JSC values, but the TCPIP value is still allowed for backward compatibility.

- The TPLGYPRM() parameter is used to define the name of the member in parmlib that contains the TOPOLOGY definitions for the distributed Tivoli Workload Scheduler network. The TPLGYPRM() parameter must be specified if PROTOCOL(E2E) is specified.

See Figure 4-6 on page 176 for an example of the required SERVOPTS parameters for an end-to-end server (TWSCE2E in Figure 4-6).

TPLGYPRM(member name/TPLGPARM) in BATCHOPT

It is important to remember to add the TPLGYPRM() parameter to the BATCHOPT initialization statement that is used by the Tivoli Workload Scheduler for z/OS planning jobs (trial plan extend, plan extend, plan replan) and Symphony renew. If the TPLGYPRM() parameter is not specified in the BATCHOPT initialization statement that is used by the plan jobs, no Symphony file will be created and no jobs will run in the distributed Tivoli Workload Scheduler network.

See Figure 4-6 on page 176 for an example of how to specify the TPLGYPRM() parameter in the BATCHOPT initialization statement.
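To summarize, the same topology member must be referenced in both places. A minimal sketch (subsystem and member names follow the examples in Figure 4-6; other required keywords are omitted):

SERVOPTS SUBSYS(TWSC)
         PROTOCOL(E2E)
         TPLGYPRM(TPLGPARM)

BATCHOPT ...
         TPLGYPRM(TPLGPARM)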
Note: The topology definitions in the member referenced by TPLGYPRM() in the BATCHOPT initialization statement are read and verified by the trial plan extend job in Tivoli Workload Scheduler for z/OS. This means that the trial plan extend job can be used to verify the TOPOLOGY definitions, such as DOMREC, CPUREC, and USRREC, for syntax errors or logical errors before the plan extend or plan replan job is executed. Also note that the trial plan extend job does not create a new Symphony file, because it does not update the current plan in Tivoli Workload Scheduler for z/OS.

TOPOLOGY statement

This statement includes all of the parameters that are related to the end-to-end feature. TOPOLOGY is defined in the member of the EQQPARM library that is specified by the TPLGYPRM parameter in the BATCHOPT and SERVOPTS statements. Figure 4-7 on page 179 shows the syntax of the topology member.
Figure 4-7 The statements that can be specified in the topology member

Description of the topology statements

The topology parameters are described in the following sections.

BINDIR(directory name)

Specifies the name of the base file system (HFS or zFS) directory where binaries, catalogs, and the other files are installed and shared among subsystems. The specified directory must be the same as the directory where the binaries are, without the final bin. For example, if the binaries are installed in /usr/lpp/TWS/V8R2M0/bin and the catalogs are in
/usr/lpp/TWS/V8R2M0/catalog/C, then the directory must be specified in the BINDIR keyword as /usr/lpp/TWS/V8R2M0.

CODEPAGE(host system codepage/IBM-037)

Specifies the name of the host code page; it applies to the end-to-end feature. The value is used by the input translator to convert data received from first-level Tivoli Workload Scheduler domain managers from UTF-8 format to EBCDIC format. You can provide the IBM-xxx value, where xxx is the EBCDIC code page. The default value, IBM-037, defines the EBCDIC code page for US English, Portuguese, and Canadian French. For a complete list of available code pages, refer to Tivoli Workload Scheduler for z/OS Customization and Tuning, SH19-4544.

ENABLELISTSECCHK(YES/NO)

This security option controls the ability to list objects in the plan on an FTA using conman and the Job Scheduling Console. Put simply, this option determines whether conman and the Tivoli Workload Scheduler connector programs check the Tivoli Workload Scheduler Security file before allowing the user to list objects in the plan. If set to YES, objects in the plan are shown to the user only if the user has been granted the list permission in the Security file. If set to NO, all users are able to list objects in the plan on FTAs, regardless of whether list access is granted in the Security file. The default value is NO. Change the value to YES if you want to check for the list permission in the Security file.

GRANTLOGONASBATCH(YES/NO)

This applies only to jobs running on Windows platforms. If set to YES, the logon users for Windows jobs are automatically granted the right to log on as batch job. If set to NO or omitted, the right must be granted manually to each user or group. The right cannot be granted automatically for users running jobs on a backup domain controller, so you must grant those rights manually.

HOSTNAME(host name/IP address/local host name)

Specifies the host name or the IP address used by the server in the end-to-end environment. The default is the host name returned by the operating system. If you change the value, you must also restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

As described in Section 3.4.6, "TCP/IP considerations for end-to-end server in sysplex" on page 129, you can define a virtual IP address for each server of the active controller and the standby controllers. If you use a dynamic virtual IP address in a sysplex environment, when the active controller fails and the
standby controller takes over the communication, the FTAs automatically switch the communication to the server of the standby controller.

To change the HOSTNAME of a server, perform the following actions:

1. Set the nm ipvalidate keyword to off in the localopts file on the first-level domain managers.
2. Change the HOSTNAME value of the server using the TOPOLOGY statement.
3. Restart the server with the new HOSTNAME value.
4. Renew the Symphony file.
5. If the renewal ends successfully, you can set the ipvalidate keyword back to full on the first-level domain managers.

See 3.4.6, "TCP/IP considerations for end-to-end server in sysplex" on page 129 for a description of how to define a DVIPA IP address.

LOGLINES(number of lines/100)

Specifies the maximum number of lines that the job log retriever returns for a single job log. The default value is 100. In all cases, the job log retriever does not return more than half of the number of records that exist in the input queue. If the job log retriever does not return all of the job log lines because there are more lines than the LOGLINES() number of lines, a notice similar to the following appears in the retrieved job log output:

*** nnn lines have been discarded. Final part of Joblog ... ******

The line specifies the number (nnn) of job log lines not displayed, between the first lines and the last lines of the job log.

NOPTIMEDEPENDENCY(YES/NO)

With this option, you can change the behavior of NOP-ed (no-operation) operations that are defined on fault-tolerant workstations and have the centralized script option set to N. By default, Tivoli Workload Scheduler for z/OS completes these NOP-ed operations without waiting for the time dependency to be resolved. With this option set to YES, the operation can be completed in the current plan only after the time dependency has been resolved. The default value is NO.

Note: This statement is introduced by APAR PQ84233.

PLANAUDITLEVEL(0/1)

Enables or disables plan auditing for FTAs. Each Tivoli Workload Scheduler workstation maintains its own log. Valid values are 0 to disable plan auditing and
1 to activate plan auditing. Auditing information is logged to a flat file in the TWShome/audit/plan directory. Only actions, not the success or failure of any action, are logged in the auditing file. If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

PORTNUMBER(port/31111)

Defines the TCP/IP port number that is used by the server to communicate with the FTAs. This value must be different from the one specified in the SERVOPTS member. The default value is 31111, and accepted values are from 0 to 65535. If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

Important: The port number must be unique within a Tivoli Workload Scheduler network.

SSLLEVEL(ON/OFF/ENABLED/FORCE)

Defines the type of SSL authentication for the end-to-end server (the OPCMASTER workstation). It must have one of the following values:

ON       The server uses SSL authentication only if another workstation requires it.
OFF      (default) The server does not support SSL authentication for its connections.
ENABLED  The server uses SSL authentication only if another workstation requires it.
FORCE    The server uses SSL authentication for all of its connections. It refuses any incoming connection that is not SSL.

If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

SSLPORT(SSL port number/31113)

Defines the port used to listen for incoming SSL connections on the server. It substitutes the value of nm SSL port in the localopts file, activating SSL support on the server. If SSLLEVEL is specified and SSLPORT is missing, 31113 is used as the default value. If SSLLEVEL is not specified, the default value of this parameter is 0 on the server, which indicates that no SSL authentication is required. If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.
TCPIPJOBNAME(TCP/IP started-task name/TCPIP)

Specifies the TCP/IP started-task name used by the server. Set this keyword when you have multiple TCP/IP stacks or a TCP/IP started task with a name different from TCPIP. You can specify a name of one to eight alphanumeric or national characters, where the first character is alphabetic or national.

TPLGYMEM(member name/TPLGINFO)

Specifies the PARMLIB member where the domain (DOMREC) and workstation (CPUREC) definitions specific to end-to-end scheduling reside. The default value is TPLGINFO. If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

TRCDAYS(days/14)

Specifies the number of days that the trace files and the files in the stdlist directory are kept before being deleted. Every day the USS code creates a new stdlist directory to contain the logs for that day. All log directories that are older than the number of days specified in TRCDAYS() are deleted automatically. The default value is 14. Specify 0 if you do not want the trace files to be deleted.

Recommendation: Monitor the size of your work directory (that is, the size of the HFS cluster with the work files) to prevent the HFS cluster from becoming full. The trace files and the files in the stdlist directory contain internal logging information and Tivoli Workload Scheduler messages that may be useful for troubleshooting. You should consider deleting them at a regular interval using the TRCDAYS() parameter.

USRMEM(member name/USRINFO)

Specifies the PARMLIB member where the user definitions reside. This keyword is optional, except when you are going to schedule jobs on Windows operating systems, in which case it is required. The default value is USRINFO. If you change the value, you must restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.

WRKDIR(directory name)

Specifies the location of the working files for an end-to-end server started task. Each Tivoli Workload Scheduler for z/OS end-to-end server must have its own WRKDIR.
ENABLESWITCHFT(Y/N)

This is a new parameter (not shown in Figure 4-7 on page 179) that was introduced in Fix Pack 04 for Tivoli Workload Scheduler and APAR PQ81120 for Tivoli Workload Scheduler for z/OS. It is used to activate the enhanced fault-tolerant switch mechanism on domain managers. The default is N, meaning that the enhanced fault-tolerant mechanism is not activated. For more information, check the documentation in the FaultTolerantSwitch.README.pdf file delivered with Fix Pack 04 for Tivoli Workload Scheduler.

4.2.7 Initialization statements used to describe the topology

With the last three parameters listed in Table 4-3 on page 175 (DOMREC, CPUREC, and USRREC), you define the topology of the distributed Tivoli Workload Scheduler network in Tivoli Workload Scheduler for z/OS. The defined topology is used by the plan extend, replan, and Symphony renew batch jobs when creating the Symphony file for the distributed Tivoli Workload Scheduler network.

Figure 4-8 on page 185 shows how the distributed Tivoli Workload Scheduler topology is described using CPUREC and DOMREC initialization statements for the Tivoli Workload Scheduler for z/OS server and plan programs. The Tivoli Workload Scheduler for z/OS fault-tolerant workstations are mapped to physical Tivoli Workload Scheduler agents or workstations using the CPUREC statement. The DOMREC statement is used to describe the domain topology in the distributed Tivoli Workload Scheduler network.

Note that the MASTERDM domain is predefined in Tivoli Workload Scheduler for z/OS; it is not necessary to specify a DOMREC parameter for the MASTERDM domain. Also note that the USRREC parameters are not depicted in Figure 4-8 on page 185.
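The USRREC syntax is described in "USRREC statement" on page 195; purely as an illustration (the workstation name, user name, and password below are invented values), a Windows user definition has this general shape:

USRREC USRCPU(A001)
       USRNAM(Administrator)
       USRPSW('password')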
Figure 4-8 The topology definitions for server and plan programs

In the following sections, we walk through the DOMREC, CPUREC, and USRREC statements.

DOMREC statement

This statement begins a domain definition. You must specify one DOMREC for each domain in the Tivoli Workload Scheduler network, with the exception of the master domain. The domain name used for the master domain is MASTERDM. The master domain consists of the controller, which acts as the master domain manager. The CPU name used for the master domain manager is OPCMASTER.

You must specify at least one domain, a child of MASTERDM, where the domain manager is a fault-tolerant agent. If you do not define this domain, Tivoli Workload Scheduler for z/OS tries to find a domain definition that can function as a child of the master domain.
Figure 4-9 Example of two DOMREC statements for a network with two domains. The figure shows a network with master domain MASTERDM (domain manager OPCMASTER) and two child domains: DomainA (domain manager A000, agents A001 and A002) and DomainB (domain manager B000, agents B001 and B002). The DOMRECs in the topology member EQQPARM(TPLGINFO) are:

DOMREC DOMAIN(DOMAINA)
       DOMMNGR(A000)
       DOMPARENT(MASTERDM)
DOMREC DOMAIN(DOMAINB)
       DOMMNGR(B000)
       DOMPARENT(MASTERDM)

OPC does not have a built-in place to store information about Tivoli Workload Scheduler domains, so domains and their relationships are defined in DOMRECs. There is no DOMREC for the master domain, MASTERDM. DOMRECs are used to add information about Tivoli Workload Scheduler domains to the Symphony file.

DOMREC is defined in the member of the EQQPARM library that is specified by the TPLGYMEM keyword in the TOPOLOGY statement (see Figure 4-6 on page 176 and Figure 4-9). Figure 4-10 illustrates the DOMREC syntax.

Figure 4-10 Syntax for the DOMREC statement

DOMAIN(domain name)
The name of the domain, consisting of up to 16 characters starting with a letter. It can contain dashes and underscores.
DOMMNGR(domain manager name)
The Tivoli Workload Scheduler workstation name of the domain manager. It must be a fault-tolerant agent running in full status mode.

DOMPARENT(parent domain)
The name of the parent domain.

CPUREC statement

This statement begins a Tivoli Workload Scheduler workstation (CPU) definition. You must specify one CPUREC for each workstation in the Tivoli Workload Scheduler network, with the exception of the controller, which acts as the master domain manager. You must provide a definition for each Tivoli Workload Scheduler for z/OS workstation that is defined in the database as a Tivoli Workload Scheduler fault-tolerant workstation.

CPUREC is defined in the member of the EQQPARM library that is specified by the TPLGYMEM keyword in the TOPOLOGY statement (see Figure 4-6 on page 176 and Figure 4-11 on page 188).
Figure 4-11 Example of two CPUREC statements for two workstations. The figure shows the same two-domain network as Figure 4-9; the fault-tolerant workstations defined in OPC (A000, B000, A001, A002, B001, and B002) are mapped to physical agents by CPURECs in the topology member EQQPARM(TPLGINFO), for example:

CPUREC CPUNAME(A000)
       CPUOS(AIX)
       CPUNODE(stockholm)
       CPUTCPIP(31281)
       CPUDOMAIN(DomainA)
       CPUTYPE(FTA)
       CPUAUTOLINK(ON)
       CPUFULLSTAT(ON)
       CPURESDEP(ON)
       CPULIMIT(20)
       CPUTZ(ECT)
       CPUUSER(root)
CPUREC CPUNAME(A001)
       CPUOS(WNT)
       CPUNODE(copenhagen)
       CPUDOMAIN(DOMAINA)
       CPUTYPE(FTA)
       CPUAUTOLINK(ON)
       CPULIMIT(10)
       CPUTZ(ECT)
       CPUUSER(Administrator)
       FIREWALL(Y)
       SSLLEVEL(FORCE)
       SSLPORT(31281)

Valid CPUOS values are AIX, HPUX, POSIX, UNIX, WNT, and OTHER. OPC does not have fields to contain the extra information in a Tivoli Workload Scheduler workstation definition, so OPC workstations marked fault tolerant must also have a CPUREC; the workstation name in OPC acts as a pointer to the CPUREC. There is no CPUREC for the master domain manager, OPCMASTER. CPURECs are used to add information about domain managers and FTAs to the Symphony file.

Figure 4-12 on page 189 illustrates the CPUREC syntax.
Figure 4-12 Syntax for the CPUREC statement

CPUNAME(cpu name)
The name of the Tivoli Workload Scheduler workstation, consisting of up to four alphanumerical characters, starting with a letter.
CPUOS(operating system)
The host CPU operating system related to the Tivoli Workload Scheduler workstation. The valid entries are AIX, HPUX, POSIX, UNIX, WNT, and OTHER.

CPUNODE(node name)
The node name or the IP address of the CPU. Fully qualified domain names of up to 52 characters are accepted.

CPUTCPIP(port number/31111)
The TCP port number of netman on this CPU. It consists of up to five digits and, if omitted, uses the default value, 31111.

CPUDOMAIN(domain name)
The name of the Tivoli Workload Scheduler domain of the CPU.

CPUHOST(cpu name)
The name of the host CPU of the agent. It is required for standard and extended agents. The host is the Tivoli Workload Scheduler CPU with which the standard or extended agent communicates and where its access method resides.

Note: The host cannot be another standard or extended agent.

CPUACCESS(access method)
The name of the access method. It is valid for extended agents and must be the name of a file that resides in the Tivoli Workload Scheduler <home>/methods directory on the host CPU of the agent.

CPUTYPE(SAGENT/XAGENT/FTA)
The CPU type, specified as one of the following:

FTA     (default) Fault-tolerant agent, including domain managers and backup domain managers.
SAGENT  Standard agent.
XAGENT  Extended agent.

Note: If the extended-agent workstation is manually set to Link, Unlink, Active, or Offline, the command is sent to its host CPU.

CPUAUTOLNK(OFF/ON)
Autolink is most effective during the initial start-up sequence of each plan. At that point a new Symphony file is created and all workstations are stopped and restarted.
   For a fault-tolerant agent or standard agent, specify ON so that, when the domain manager starts, it sends the new production control file (Symphony) to start the agent and open communication with it. For the domain manager, specify ON so that when the agents start they open communication with the domain manager.
   Specify OFF to initialize an agent when you submit a link command manually from the Tivoli Workload Scheduler for z/OS Modify Current Plan ISPF dialogs or from the Job Scheduling Console.
   Note: If the X-agent workstation is manually set to Link, Unlink, Active, or Offline, the command is sent to its host CPU.

CPUFULLSTAT(ON/OFF)
   This applies only to fault-tolerant agents. If you specify OFF for a domain manager, the value is forced to ON.
   Specify ON for the link from the domain manager to operate in Full Status mode. In this mode, the agent is kept updated about the status of jobs and job streams that are running on other workstations in the network.
   Specify OFF for the agent to receive status information only about the jobs and schedules on other workstations that affect its own jobs and schedules. This can improve performance by reducing network traffic.
   To keep the production control file for an agent at the same level of detail as its domain manager, set CPUFULLSTAT and CPURESDEP (see below) to ON. Always set these modes to ON for backup domain managers. You should also be aware of the new TOPOLOGY ENABLESWITCHFT() parameter described in “ENABLESWITCHFT(Y/N)” on page 184.

CPURESDEP(ON/OFF)
   This applies only to fault-tolerant agents. If you specify OFF for a domain manager, the value is forced to ON.
   Specify ON to have the agent’s production control process operate in Resolve All Dependencies mode. In this mode, the agent tracks dependencies for all of its jobs and schedules, including those running on other CPUs.
   Note: CPUFULLSTAT must also be ON so that the agent is informed about the activity on other workstations.
   Specify OFF if you want the agent to track dependencies only for its own jobs and schedules. This reduces CPU usage by limiting processing overhead.
   To keep the production control file for an agent at the same level of detail as its domain manager, set CPUFULLSTAT and CPURESDEP to ON. Always set these modes to ON for backup domain managers. You should also be aware of the new TOPOLOGY ENABLESWITCHFT() parameter that is described in “ENABLESWITCHFT(Y/N)” on page 184.

CPUSERVER(server ID)
   This applies only to fault-tolerant and standard agents. Omit this option for domain managers.
   This keyword can be a letter or a number (A-Z or 0-9) and identifies a server (mailman) process on the domain manager that sends messages to the agent. The IDs are unique to each domain manager, so you can use the same IDs for agents in different domains without conflict. If more than 36 server IDs are required in a domain, consider dividing it into two or more domains.
   If a server ID is not specified, messages to a fault-tolerant or standard agent are handled by a single mailman process on the domain manager. Entering a server ID causes the domain manager to create an additional mailman process. The same server ID can be used for multiple agents. The use of servers reduces the time that is required to initialize agents and generally improves the timeliness of messages.

Notes on multiple mailman processes: When setting up multiple mailman processes, do not forget that each mailman server process uses extra CPU resources on the workstation on which it is created, so be careful not to create excessive mailman processes on low-end domain managers. In most cases, using extra domain managers is a better choice than configuring extra mailman processes. Cases in which the use of extra mailman processes might be beneficial include:
– Important FTAs that run mission-critical jobs.
– Slow-initializing FTAs that are at the other end of a slow link. (If you have more than a couple of workstations over a slow link connection to OPCMASTER, a better idea is to place a remote domain manager to serve these workstations.)
If you have unstable workstations in the network, do not put them under the same mailman server ID as your critical servers.
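For instance, splitting four agents in a domain across two server IDs (the scenario shown in Figure 4-13 below) could be sketched with CPUREC fragments like these; only the CPUSERVER-related lines are shown, the remaining keywords are elided, and the workstation names are purely illustrative:

CPUREC CPUNAME(FTA1)
       CPUSERVER(A)     /* FTA1 and FTA2 share mailman server A */
       ...
CPUREC CPUNAME(FTA2)
       CPUSERVER(A)
       ...
CPUREC CPUNAME(FTA3)
       CPUSERVER(1)     /* FTA3 and FTA4 share mailman server 1 */
       ...
CPUREC CPUNAME(FTA4)
       CPUSERVER(1)
       ...

With these definitions, the domain manager starts one additional mailman process for server ID A and one for server ID 1, in addition to its main mailman process.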
See Figure 4-13 for an example of CPUSERVER() use. The figure shows that one mailman process on domain manager FDMA has to handle all outbound communication with the five FTAs (FTA1 to FTA5) if these workstations (CPUs) are defined without the CPUSERVER() parameter. If FTA1 and FTA2 are defined with CPUSERVER(A), and FTA3 and FTA4 are defined with CPUSERVER(1), the domain manager FDMA will start two new mailman processes for these two server IDs (A and 1).

Figure 4-13 Usage of CPUSERVER() IDs to start extra mailman processes (with no server IDs, the main mailman process on the domain manager handles all outbound communications with the FTAs in the domain; with two different server IDs, an extra mailman process is spawned for each server ID in the domain)

CPULIMIT(value/1024)
   Specifies the number of jobs that can run at the same time in a CPU. The default value is 1024. The accepted values are integers from 0 to 1024. If you specify 0, no jobs are launched on the workstation.

CPUTZ(timezone/UTC)
   Specifies the local time zone of the FTA. It must match the time zone of the operating system in which the FTA runs. For a complete list of valid time zones, refer to the appendix of the IBM Tivoli Workload Scheduler Reference Guide, SC32-1274.
   If the time zone does not match that of the agent, the message AWSBHT128I is displayed in the log file of the FTA. The default is UTC (universal coordinated time).
   To avoid inconsistency between the local date and time of the jobs and of the Symphony creation, use the CPUTZ keyword to set the local time zone of the fault-tolerant workstation. If the Symphony creation date is later than the current local date of the FTW, the Symphony file is not processed.
   In the end-to-end environment, time zones are disabled by default when installing or upgrading Tivoli Workload Scheduler for z/OS. If the CPUTZ keyword is not specified, time zones are disabled. For additional information about how to set the time zone in an end-to-end network, see the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273.

CPUUSER(default user/tws)
   Specifies the default user for the workstation. The maximum length is 47 characters. The default value is tws. The value of this option is used only if you have not defined the user in the JOBUSR option of the SCRPTLIB JOBREC statement or supplied it with the Tivoli Workload Scheduler for z/OS job submit exit EQQUX001 for centralized script.

SSLLEVEL(ON/OFF/ENABLED/FORCE)
   Must have one of the following values:
   ON            The workstation uses SSL authentication when it connects with its domain manager. The domain manager uses SSL authentication when it connects with a domain manager of a parent domain. However, it refuses any incoming connection from its domain manager if the connection does not use SSL authentication.
   OFF (default) The workstation does not support SSL authentication for its connections.
   ENABLED       The workstation uses SSL authentication only if another workstation requires it.
   FORCE         The workstation uses SSL authentication for all of its connections. It refuses any incoming connection if it is not SSL.
   If this attribute is set to OFF or omitted, the workstation is not intended to be configured for SSL. In this case, any value for SSLPORT (see below) will be ignored. You should also set the nm ssl port local option to 0 (in the localopts file) to be sure that this port is not opened by netman.
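For example, for a fault-tolerant workstation defined with SSLLEVEL(OFF), the related entries in its localopts file might look like the following sketch. The option names are the nm port and nm ssl port options referred to above; the port value is illustrative, and the name = value layout is the usual localopts convention:

# normal netman listening port (should match CPUTCPIP in the CPUREC)
nm port =31182
# SSL port closed because SSLLEVEL(OFF) is used for this workstation
nm ssl port =0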
SSLPORT(SSL port number/31113)
   Defines the port used to listen for incoming SSL connections. This value must match the one defined in the nm SSL port local option (in the localopts file) of the workstation (the server with Tivoli Workload Scheduler installed). It must be different from the nm port local option (in the localopts file) that defines the port used for normal communications. If SSLLEVEL is specified but SSLPORT is missing, 31113 is used as the default value. If SSLLEVEL is not specified either, the default value of this parameter is 0 on FTWs, which indicates that no SSL authentication is required.

FIREWALL(YES/NO)
   Specifies whether the communication between a workstation and its domain manager must cross a firewall. If you set the FIREWALL keyword for a workstation to YES, it means that a firewall exists between that particular workstation and its domain manager, and that the link between the domain manager and the workstation (which can be another domain manager itself) is the only link that is allowed between the respective domains. Also, for all workstations having this option set to YES, the commands to start (start workstation) or stop (stop workstation) the workstation or to get the standard list (showjobs) are transmitted through the domain hierarchy instead of opening a direct connection between the master (or domain manager) and the workstation. The default value for FIREWALL is NO, meaning that there is no firewall boundary between the workstation and its domain manager.
   To specify that an extended agent is behind a firewall, set the FIREWALL keyword for the host workstation. The host workstation is the Tivoli Workload Scheduler workstation with which the extended agent communicates and where its access method resides.

USRREC statement
This statement defines the passwords for the users who need to schedule jobs to run on Windows workstations. USRREC is defined in the member of the EQQPARM library as specified by the USERMEM keyword in the TOPOLOGY statement. (See Figure 4-6 on page 176 and Figure 4-15 on page 197.)

Figure 4-14 illustrates the USRREC syntax.

Figure 4-14 Syntax for the USRREC statement
USRCPU(cpuname)
   The Tivoli Workload Scheduler workstation on which the user can launch jobs. It consists of four alphanumerical characters, starting with a letter. It is valid only on Windows workstations.

USRNAM(logon ID)
   The user name of a Windows workstation. It can include a domain name and can consist of 47 characters. Windows user names are case-sensitive. The user must be able to log on to the computer on which Tivoli Workload Scheduler has launched jobs, and must also be authorized to log on as batch. If the user name is not unique in Windows, it is considered to be either a local user, a domain user, or a trusted domain user, in that order.

USRPWD(password)
   The user password for the user of a Windows workstation (Figure 4-15 on page 197). It can consist of up to 31 characters and must be enclosed in single quotation marks. Do not specify this keyword if the user does not need a password. You can change the password every time you create a Symphony file (when creating a CP extension).

Attention: The password is not encrypted. You must take the necessary action to protect the password from unauthorized access. One way to do this is to place the USRREC definitions in a separate member in a separate library. This library should then be protected with RACF so it can be accessed only by authorized persons. The library should be added in the EQQPARM data set concatenation in the end-to-end server started task and in the plan extend, replan, and Symphony renew batch jobs.

Example JCL for plan replan, extend, and Symphony renew batch jobs:

//EQQPARM  DD DISP=SHR,DSN=TWS.V8R20.PARMLIB(BATCHOPT)
//         DD DISP=SHR,DSN=TWS.V8R20.PARMUSR

In this example, the USRREC member is placed in the TWS.V8R20.PARMUSR library. This library can then be protected with RACF according to your standards. All other BATCHOPT initialization statements are placed in the usual parameter library. In the example, this library is named TWS.V8R20.PARMLIB and the member is BATCHOPT.
Figure 4-15 Example of three USRREC definitions: for a local and domain Windows user (USRRECs in the user member, EQQPARM(USERINFO)). The figure shows the following definitions:

USRREC USRCPU(F202)
       USERNAM(tws)
       USRPSW(tivoli00)
USRREC USRCPU(F202)
       USERNAM(Jim Smith)
       USRPSW(ibm9876)
USRREC USRCPU(F302)
       USERNAM(SouthMUser1)
       USRPSW(d9fj4k)
...

Notes from the figure: OPC doesn’t have a built-in way to store Windows users and passwords; for this reason, the users are defined by adding USRRECs to the user member of EQQPARM. USRRECs are used to add Windows NT user definitions to the Symphony file.

4.2.8 Example of DOMREC and CPUREC definitions

We have explained how to use DOMREC and CPUREC statements to define the network topology for a Tivoli Workload Scheduler network in a Tivoli Workload Scheduler for z/OS end-to-end environment. We now use these statements to define a simple Tivoli Workload Scheduler network in Tivoli Workload Scheduler for z/OS.

As an example, Figure 4-16 on page 198 illustrates a simple Tivoli Workload Scheduler network. In this network there is one domain, DOMAIN1, under the master domain (MASTERDM).
Figure 4-16 Simple end-to-end scheduling environment (MASTERDM: z/OS master domain manager OPCMASTER; DOMAIN1: AIX domain manager F100 at copenhagen.dk.ibm.com, with F101 (backup domain manager for the domain) at london.uk.ibm.com on AIX and F102 at stockholm.se.ibm.com on Windows)

Example 4-3 describes the DOMAIN1 domain with the DOMREC topology statement.

Example 4-3 Domain definition
DOMREC DOMAIN(DOMAIN1)        /* Name of the domain is DOMAIN1 */
       DOMMNGR(F100)          /* F100 workst. is domain mng.   */
       DOMPARENT(MASTERDM)    /* Domain parent is MASTERDM     */

In end-to-end scheduling, the master domain (MASTERDM) is always the Tivoli Workload Scheduler for z/OS controller. (It is predefined and cannot be changed.) Since the DOMAIN1 domain is under the MASTERDM domain, MASTERDM must be defined in the DOMPARENT parameter. The DOMMNGR keyword represents the name of the workstation.

There are three workstations (CPUs) in the DOMAIN1 domain. To define these workstations in the Tivoli Workload Scheduler for z/OS end-to-end network, we must define three CPURECs, one for each workstation (server) in the network.

Example 4-4 Workstation (CPUREC) definitions for the three FTWs
CPUREC CPUNAME(F100)          /* Domain manager for DM100      */
       CPUOS(AIX)             /* AIX operating system          */
       CPUNODE(copenhagen.dk.ibm.com) /* IP address of CPU (DNS) */
       CPUTCPIP(31281)        /* TCP port number of NETMAN     */
       CPUDOMAIN(DM100)       /* The TWS domain name for CPU   */
       CPUTYPE(FTA)           /* This is a FTA CPU type        */
       CPUAUTOLNK(ON)         /* Autolink is on for this CPU   */
       CPUFULLSTAT(ON)        /* Full status on for DM         */
       CPURESDEP(ON)          /* Resolve dependencies on for DM*/
       CPULIMIT(20)           /* Number of jobs in parallel    */
       CPUTZ(Europe/Copenhagen) /* Time zone for this CPU      */
       CPUUSER(twstest)       /* default user for CPU          */
       SSLLEVEL(OFF)          /* SSL is not active             */
       SSLPORT(31113)         /* Default SSL port              */
       FIREWALL(NO)           /* WS not behind firewall        */
CPUREC CPUNAME(F101)          /* fault tolerant agent in DM100 */
       CPUOS(AIX)             /* AIX operating system          */
       CPUNODE(london.uk.ibm.com) /* IP address of CPU (DNS)   */
       CPUTCPIP(31281)        /* TCP port number of NETMAN     */
       CPUDOMAIN(DM100)       /* The TWS domain name for CPU   */
       CPUTYPE(FTA)           /* This is a FTA CPU type        */
       CPUAUTOLNK(ON)         /* Autolink is on for this CPU   */
       CPUFULLSTAT(ON)        /* Full status on for BDM        */
       CPURESDEP(ON)          /* Resolve dependencies on BDM   */
       CPULIMIT(20)           /* Number of jobs in parallel    */
       CPUSERVER(A)           /* Start extra mailman process   */
       CPUTZ(Europe/London)   /* Time zone for this CPU        */
       CPUUSER(maestro)       /* default user for ws           */
       SSLLEVEL(OFF)          /* SSL is not active             */
       SSLPORT(31113)         /* Default SSL port              */
       FIREWALL(NO)           /* WS not behind firewall        */
CPUREC CPUNAME(F102)          /* fault tolerant agent in DM100 */
       CPUOS(WNT)             /* Windows operating system      */
       CPUNODE(stockholm.se.ibm.com) /* IP address for CPU (DNS) */
       CPUTCPIP(31281)        /* TCP port number of NETMAN     */
       CPUDOMAIN(DM100)       /* The TWS domain name for CPU   */
       CPUTYPE(FTA)           /* This is a FTA CPU type        */
       CPUAUTOLNK(ON)         /* Autolink is on for this CPU   */
       CPUFULLSTAT(OFF)       /* Full status off for FTA       */
       CPURESDEP(OFF)         /* Resolve dependencies off FTA  */
       CPULIMIT(10)           /* Number of jobs in parallel    */
       CPUSERVER(A)           /* Start extra mailman process   */
       CPUTZ(Europe/Stockholm) /* Time zone for this CPU       */
       CPUUSER(twstest)       /* default user for ws           */
       SSLLEVEL(OFF)          /* SSL is not active             */
       SSLPORT(31113)         /* Default SSL port              */
       FIREWALL(NO)           /* WS not behind firewall        */
Because F101 is going to be the backup domain manager for F100, F101 is defined with CPUFULLSTAT(ON) and CPURESDEP(ON). F102 is a fault-tolerant agent without extra responsibilities, so it is defined with CPUFULLSTAT(OFF) and CPURESDEP(OFF), because dependency resolution within the domain is the task of the domain manager. This improves performance by reducing network traffic.

Note: CPUOS(WNT) applies for all Windows platforms.

Finally, since F102 runs on a Windows server, we must create at least one USRREC definition for this server. In our example, we would like to be able to run jobs on the Windows server under either the Tivoli Workload Scheduler installation user (twstest) or the database user, databusr.

Example 4-5 USRREC definition for the F102 Windows users, twstest and databusr
USRREC USRCPU(F102)           /* Definition for F102 Windows CPU */
       USRNAM(twstest)        /* The user name (local user)      */
       USRPSW('twspw01')      /* The password for twstest        */
USRREC USRCPU(F102)           /* Definition for F102 Windows CPU */
       USRNAM(databusr)       /* The user name (local user)      */
       USRPSW('data01ad')     /* Password for databusr           */

4.2.9 The JTOPTS TWSJOBNAME() parameter

With the JTOPTS TWSJOBNAME() parameter, it is possible to specify different criteria that Tivoli Workload Scheduler for z/OS should use when creating the job name in the Symphony file in USS. The syntax for the JTOPTS TWSJOBNAME() parameter is:

TWSJOBNAME(EXTNAME/EXTNOCC/JOBNAME/OCCNAME)

If you do not specify the TWSJOBNAME() parameter, the value OCCNAME is used by default. When choosing OCCNAME, the job names in the Symphony file will be generated with one of the following formats:

<X>_<Num>_<Application Name> when the job is created in the Symphony file
<X>_<Num>_<Ext>_<Application Name> when the job is first deleted and then recreated in the current plan

In these examples, <X> can be J for normal jobs (operations), P for jobs representing pending predecessors, and R for recovery jobs.
<Num> is the operation number. <Ext> is a sequential decimal number that is increased every time an operation is deleted and then recreated. <Application Name> is the name of the occurrence that the operation belongs to.

See Figure 4-17 for an example of how the job names (and job stream names) are generated by default in the Symphony file when JTOPTS TWSJOBNAME(OCCNAME) is specified or defaulted. Note that an occurrence in Tivoli Workload Scheduler for z/OS is the same as a JSC job stream instance (that is, a job stream or an application that is in the plan in Tivoli Workload Scheduler for z/OS).

Figure 4-17 Generation of job and job stream names in the Symphony file. The figure maps the OPC current plan to the Symphony file: the job stream instance (application occurrence) DAILY with input arrival time 0800 and occurrence token B8FF08015E683C44 becomes job stream B8FF08015E683C44 in the Symphony file, with its operations 010 DLYJOB1, 015 DLYJOB2, and 020 DLYJOB3 becoming jobs J_010_DAILY, J_015_DAILY, and J_020_DAILY; a second DAILY occurrence with input arrival time 0900 and token B8FFF05B29182108 becomes job stream B8FFF05B29182108 with the same three job names. Each instance of a job stream in OPC is assigned a unique occurrence token. If the job stream is added to the TWS Symphony file, the occurrence token is used as the job stream name in the Symphony file.
If any of the other values (EXTNAME, EXTNOCC, or JOBNAME) is specified in the JTOPTS TWSJOBNAME() parameter, the job name in the Symphony file is created according to one of the following formats:

<X><Num>_<JobInfo> when the job is created in the Symphony file
<X><Num>_<Ext>_<JobInfo> when the job is first deleted and then recreated in the current plan

In these examples:
<X> can be J for normal jobs (operations), P for jobs representing pending predecessors, and R for recovery jobs. For jobs representing pending predecessors, the job name is in all cases generated by using the OCCNAME criterion. This is because, in the case of pending predecessors, the current plan does not contain the required information (except the name of the occurrence) to build the Symphony name according to the other criteria.
<Num> is the operation number.
<Ext> is the hexadecimal value of a sequential number that is increased every time an operation is deleted and then recreated.
<JobInfo> depends on the chosen criterion:
– For EXTNAME: <JobInfo> is filled with the first 32 characters of the extended job name associated with that job (if it exists) or with the eight-character job name (if the extended name does not exist). Note that the extended job name, in addition to being defined in the database, must also exist in the current plan.
– For EXTNOCC: <JobInfo> is filled with the first 32 characters of the extended job name associated with that job (if it exists) or with the application name (if the extended name does not exist). Note that the extended job name, in addition to being defined in the database, must also exist in the current plan.
– For JOBNAME: <JobInfo> is filled with the eight-character job name.

The criterion that is used to generate a Tivoli Workload Scheduler job name will be maintained throughout the entire life of the job.

Note: In order to choose the EXTNAME, EXTNOCC, or JOBNAME criterion, the EQQTWSOU data set must have a record length of 160 bytes. Before using any of the above keywords, you must migrate the EQQTWSOU data set if you have allocated the data set with a record length of less than 160 bytes. Sample EQQMTWSO is available to migrate this data set from record length 120 to 160 bytes.
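As an illustration, selecting the EXTNAME criterion is a one-keyword change in the controller's JTOPTS initialization statement (shown here in isolation; a real JTOPTS statement normally carries other keywords as well):

JTOPTS TWSJOBNAME(EXTNAME)

With this setting, an operation number 010 whose extended job name is, for example, ACCOUNTS_DAILY_LOAD (a hypothetical name) would appear in the Symphony file with a name of the form J010_ACCOUNTS_DAILY_LOAD, following the <X><Num>_<JobInfo> format described above.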
Limitations when using the EXTNAME and EXTNOCC criteria:
– The job name in the Symphony file can contain only alphanumeric characters, dashes, and underscores. All other characters that are accepted for the extended job name are converted into dashes. Note that a similar limitation applies with JOBNAME: when defining members of partitioned data sets (such as the script or the job libraries), national characters can be used, but they are converted into dashes in the Symphony file.
– The job name in the Symphony file must be in uppercase. All lowercase characters in the extended name are automatically converted to uppercase by Tivoli Workload Scheduler for z/OS.

Note: Using the job name (or the extended name as part of the job name) in the Symphony file implies that it becomes a key for identifying the job. This also means that the extended name - job name is used as a key for addressing all events that are directed to the agents. For this reason, be aware of the following facts for the operations that are included in the Symphony file:
– Editing the extended name is inhibited for operations that were created when the TWSJOBNAME keyword was set to EXTNAME or EXTNOCC.
– Editing the job name is inhibited for operations that were created when the TWSJOBNAME keyword was set to EXTNAME or JOBNAME.

4.2.10 Verify end-to-end installation in Tivoli Workload Scheduler for z/OS

When all installation tasks as described in the previous sections have been completed, and all initialization statements and data sets related to end-to-end scheduling have been defined in the Tivoli Workload Scheduler for z/OS controller, end-to-end server, and plan extend, replan, and Symphony renew batch jobs, it is time to do the first verification of the mainframe part.

Note: This verification can be postponed until workstations for the fault-tolerant agents have been defined in Tivoli Workload Scheduler for z/OS and, optionally, Tivoli Workload Scheduler has been installed on the fault-tolerant agents (the Tivoli Workload Scheduler servers or agents).

Verify the Tivoli Workload Scheduler for z/OS controller
After the customization steps have been completed, simply start the Tivoli Workload Scheduler controller. Check the controller message log (EQQMLOG) for any unexpected error or warning messages. All Tivoli Workload Scheduler for z/OS messages are prefixed with EQQ. See the IBM Tivoli Workload Scheduler
for z/OS Messages and Codes Version 8.2 (Maintenance Release April 2004), SC32-1267.

Because we have activated the end-to-end feature in the controller initialization statements by specifying the OPCOPTS TPLGYSRV() parameter and we have asked the controller to start our end-to-end server by the SERVERS(TWSCE2E) parameter, we will see messages as shown in Example 4-6 in the Tivoli Workload Scheduler for z/OS controller message log (EQQMLOG).

Example 4-6 IBM Tivoli Workload Scheduler for z/OS controller messages for end-to-end
EQQZ005I OPC SUBTASK E2E ENABLER IS BEING STARTED
EQQZ085I OPC SUBTASK E2E SENDER IS BEING STARTED
EQQZ085I OPC SUBTASK E2E RECEIVER IS BEING STARTED
EQQG001I SUBTASK E2E ENABLER HAS STARTED
EQQG001I SUBTASK E2E SENDER HAS STARTED
EQQG001I SUBTASK E2E RECEIVER HAS STARTED
EQQW097I END-TO-END RECEIVER STARTED SYNCHRONIZATION WITH THE EVENT MANAGER
EQQW097I 0 EVENTS IN EQQTWSIN WILL BE REPROCESSED
EQQW098I END-TO-END RECEIVER FINISHED SYNCHRONIZATION WITH THE EVENT MANAGER
EQQ3120E END-TO-END TRANSLATOR SERVER PROCESS IS NOT AVAILABLE
EQQZ193I END-TO-END TRANSLATOR SERVER PROCESSS NOW IS AVAILABLE

Note: If you do not see all of these messages in your controller message log, you probably have not applied all available service updates. See 3.4.2, “Service updates (PSP bucket, APARs, and PTFs)” on page 117.

The messages in Example 4-6 are extracted from the Tivoli Workload Scheduler for z/OS controller message log. There will be several other messages between the messages shown in Example 4-6 if you look in your controller message log.

If the Tivoli Workload Scheduler for z/OS controller is started with empty EQQTWSIN and EQQTWSOU data sets, the messages shown in Example 4-7 will be issued in the controller message log (EQQMLOG).

Example 4-7 Formatting messages when EQQTWSOU and EQQTWSIN are empty
EQQW030I A DISK DATA SET WILL BE FORMATTED, DDNAME = EQQTWSOU
EQQW030I A DISK DATA SET WILL BE FORMATTED, DDNAME = EQQTWSIN
EQQW038I A DISK DATA SET HAS BEEN FORMATTED, DDNAME = EQQTWSOU
EQQW038I A DISK DATA SET HAS BEEN FORMATTED, DDNAME = EQQTWSIN
Note: In the Tivoli Workload Scheduler for z/OS system messages, there will also be two IEC031I messages related to the formatting messages in Example 4-7. These messages can be ignored because they are related to the formatting of the EQQTWSIN and EQQTWSOU data sets. The IEC031I messages look like:
IEC031I D37-04,IFG0554P,TWSC,TWSC,EQQTWSOU,........................
IEC031I D37-04,IFG0554P,TWSC,TWSC,EQQTWSIN,.............................

The messages in Example 4-8 and Example 4-9 show that the controller is started with the end-to-end feature active and that it is ready to run jobs in the end-to-end environment. When the Tivoli Workload Scheduler for z/OS controller is stopped, the end-to-end related messages shown in Example 4-8 will be issued.

Example 4-8 Controller messages for end-to-end when controller is stopped
EQQG003I SUBTASK E2E RECEIVER HAS ENDED
EQQG003I SUBTASK E2E SENDER HAS ENDED
EQQZ034I OPC SUBTASK E2E SENDER HAS ENDED.
EQQZ034I OPC SUBTASK E2E RECEIVER HAS ENDED.
EQQZ034I OPC SUBTASK E2E ENABLER HAS ENDED.

Verify the Tivoli Workload Scheduler for z/OS server
After the customization steps have been completed for the Tivoli Workload Scheduler end-to-end server started task, simply start the end-to-end server started task. Check the server message log (EQQMLOG) for any unexpected error or warning messages. All Tivoli Workload Scheduler for z/OS messages are prefixed with EQQ. See the IBM Tivoli Workload Scheduler for z/OS Messages and Codes, Version 8.2 (Maintenance Release April 2004), SC32-1267.

When the end-to-end server is started for the first time, check that the messages shown in Example 4-9 appear in the Tivoli Workload Scheduler for z/OS end-to-end server EQQMLOG.

Example 4-9 End-to-end server messages first time the end-to-end server is started
EQQPH00I SERVER TASK HAS STARTED
EQQPH33I THE END-TO-END PROCESSES HAVE BEEN STARTED
EQQZ024I Initializing wait parameters
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/translator" has been started, pid is 67371783
EQQPT01I Program "/usr/lpp/TWS/TWS810/bin/netman" has been started, pid is 67371919
EQQPT56W The /DD:EQQTWSIN queue has not been formatted yet
EQQPT22I Input Translator thread stopped until new Symphony will be available

The messages shown in Example 4-9 on page 205 are normal when the Tivoli Workload Scheduler for z/OS end-to-end server is started for the first time and no Symphony file has been created. Furthermore, the end-to-end server message EQQPT56W is normally issued only for the EQQTWSIN data set, if the EQQTWSIN and EQQTWSOU data sets are both empty and there is no Symphony file created.

If the Tivoli Workload Scheduler for z/OS controller and end-to-end server are started with an empty EQQTWSOU data set (for example, reallocated with a new record length), message EQQPT56W will be issued for the EQQTWSOU data set:

EQQPT56W The /DD:EQQTWSOU queue has not been formatted yet

If a Symphony file has been created, the end-to-end server message log contains the messages in the following example.

Example 4-10 End-to-end server messages when server is started with Symphony file
EQQPH33I THE END-TO-END PROCESSES HAVE BEEN STARTED
EQQZ024I Initializing wait parameters
EQQPT01I Program "/usr/lpp/TWS/TWS820/bin/translator" has been started, pid is 33817341
EQQPT01I Program "/usr/lpp/TWS/TWS820/bin/netman" has been started, pid is 262958
EQQPT20I Input Translator waiting for Batchman and Mailman are started
EQQPT21I Input Translator finished waiting for Batchman and Mailman

The messages shown in Example 4-10 are the normal start-up messages for a Tivoli Workload Scheduler for z/OS end-to-end server with a Symphony file. When the end-to-end server is stopped, the messages shown in Example 4-11 should be issued in the EQQMLOG.

Example 4-11 End-to-end server messages when server is stopped
EQQZ000I A STOP OPC COMMAND HAS BEEN RECEIVED
EQQPT04I Starter has detected a stop command
EQQPT40I Input Translator thread is shutting down
EQQPT12I The Netman process (pid=262958) ended successfully
EQQPT40I Output Translator thread is shutting down
EQQPT53I Output Translator thread has terminated
EQQPT53I Input Translator thread has terminated
EQQPT40I Input Writer thread is shutting down
EQQPT53I Input Writer thread has terminated
EQQPT12I The Translator process (pid=33817341) ended successfully
EQQPT10I All Starter's sons ended
EQQPH34I THE END-TO-END PROCESSES HAVE ENDED
EQQPH01I SERVER TASK ENDED

After successful completion of the verification, move on to the next step in the end-to-end installation.

4.3 Installing Tivoli Workload Scheduler in an end-to-end environment

In this section, we describe how to install Tivoli Workload Scheduler in an end-to-end environment.

Important: Maintenance releases of Tivoli Workload Scheduler are made available about every three months. We recommend that, before installing, you check for the latest available update at:
ftp://ftp.software.ibm.com
The latest release (as we write this book) for IBM Tivoli Workload Scheduler is 8.2-TWS-FP04 and is available at:
ftp://ftp.software.ibm.com/software/tivoli_support/patches/patches_8.2.0/8.2.0-TWS-FP04/

Installing a Tivoli Workload Scheduler agent in an end-to-end environment is not very different from installing Tivoli Workload Scheduler when Tivoli Workload Scheduler for z/OS is not involved. Follow the installation instructions in the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273. The main differences to keep in mind are that in an end-to-end environment, the master domain manager is always the Tivoli Workload Scheduler for z/OS engine (known by the Tivoli Workload Scheduler workstation name OPCMASTER), and the local workstation name of the fault-tolerant workstation is limited to four characters.

4.3.1 Installing multiple instances of Tivoli Workload Scheduler on one machine

As mentioned in Chapter 2, “End-to-end scheduling architecture” on page 25, there are often good reasons to install multiple instances of the Tivoli Workload Scheduler engine on the same machine. If you plan to do this, there are some
important considerations that should be made. Careful planning before installation can save you a considerable amount of work later.

The following items must be unique for each instance of the Tivoli Workload Scheduler engine that is installed on a computer:
– The Tivoli Workload Scheduler user name and ID associated with the instance
– The home directory of the Tivoli Workload Scheduler user
– The component group (only on tier-2 platforms: Linux/PPC, IRIX, Tru64 UNIX, Dynix, HP-UX 11i Itanium)
– The netman port number (set by the nm port option in the localopts file)

First, the user name and ID must be unique. There are many different ways to name these users; choose user names that make sense to you. It may simplify things to create a group called IBM Tivoli Workload Scheduler and make all Tivoli Workload Scheduler users members of this group. This would enable you to add group access to files in order to grant access to all Tivoli Workload Scheduler users.

When installing Tivoli Workload Scheduler on UNIX, the Tivoli Workload Scheduler user is specified by the -uname option of the UNIX customize script. It is important to specify the Tivoli Workload Scheduler user because otherwise the customize script will choose the default user name maestro. Obviously, if you plan to install multiple Tivoli Workload Scheduler engines on the same computer, they cannot both be installed as the user maestro.

Second, the home directory must be unique. In order to keep two different Tivoli Workload Scheduler engines completely separate, each one must have its own home directory.

Note: Previous versions of Tivoli Workload Scheduler installed files into a directory called unison in the parent directory of the Tivoli Workload Scheduler home directory. Tivoli Workload Scheduler 8.2 simplifies things by placing the unison directory inside the Tivoli Workload Scheduler home directory. The unison directory is a relic of the days when Unison Software’s Maestro program (the direct ancestor of IBM Tivoli Workload Scheduler) was one of several programs that all shared some common data. The unison directory was where the common data shared between Unison’s various products was stored. Important information is still stored in this directory, including the workstation database (cpudata) and the NT user database (userdata). The Tivoli Workload Scheduler Security file is no longer stored in the unison directory; it is now stored in the Tivoli Workload Scheduler home directory.
Figure 4-18 should give you an idea of how two Tivoli Workload Scheduler engines might be installed on the same computer. You can see that each engine has its own separate Tivoli Workload Scheduler directory.

Figure 4-18 Two separate Tivoli Workload Scheduler engines on one computer (TWS Engine A in /tivoli/tws/tws-a and TWS Engine B in /tivoli/tws/tws-b, each with its own network, Security, bin, and mozart directories, and its own cpudata, userdata, mastsked, and jobs files)

Example 4-12 shows the /etc/passwd entries that correspond to the two Tivoli Workload Scheduler users.

Example 4-12 Excerpt from /etc/passwd: two different Tivoli Workload Scheduler users
tws-a:!:31111:9207:TWS Engine A User:/tivoli/tws/tws-a:/usr/bin/ksh
tws-b:!:31112:9207:TWS Engine B User:/tivoli/tws/tws-b:/usr/bin/ksh

Note that each Tivoli Workload Scheduler user has a unique name, ID, and home directory.

On tier-2 platforms only (Linux/PPC, IRIX, Tru64 UNIX, Dynix, HP-UX 11i/Itanium), Tivoli Workload Scheduler still uses the /usr/unison/components file to keep track of each installed Tivoli Workload Scheduler engine. Each Tivoli Workload Scheduler engine on a tier-2 platform computer must have a unique component group name. The component group is arbitrary; it is just a name that is used by Tivoli Workload Scheduler programs to keep each engine separate. The name of the component group is entirely up to you. It can be specified using the -group option of the UNIX customize script during installation on a tier-2 platform machine. It is important to specify a different component group name for each instance of the Tivoli Workload Scheduler engine installed on a computer.
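On such a tier-2 system, installing the second engine might therefore be invoked along these lines (a sketch only: the -uname and -group options are the ones described above, the user and group names are this chapter's examples, and any other options that your level of the customize script requires are omitted):

# run the UNIX customize script for the second engine (sketch; other
# required options for your platform and distribution are omitted)
sh customize -uname tws-b -group TWS-Engine-B

The first engine would be installed the same way, with -uname tws-a and -group TWS-Engine-A.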
Component groups are stored in the file /usr/unison/components. This file contains two lines for each component group. Example 4-13 shows the components file corresponding to the two Tivoli Workload Scheduler engines.

Example 4-13 Sample /usr/unison/components file for tier-2 platforms
netman 1.8.1 /tivoli/TWS/TWS-A/tws TWS-Engine-A
maestro 8.1 /tivoli/TWS/TWS-A/tws TWS-Engine-A
netman 1.8.1.1 /tivoli/TWS/TWS-B/tws TWS-Engine-B
maestro 8.1 /tivoli/TWS/TWS-B/tws TWS-Engine-B

The component groups are called TWS-Engine-A and TWS-Engine-B. For each component group, the version and path for netman and maestro (the Tivoli Workload Scheduler engine) are listed. In this context, maestro refers simply to the Tivoli Workload Scheduler home directory.

Important: The /usr/unison/components file is used only on tier-2 platforms. On tier-1 platforms (such as AIX, Linux/x86, Solaris, HP-UX, and Windows XP), there is no longer a need to be concerned with component groups because the new ISMP installer automatically keeps track of each installed Tivoli Workload Scheduler engine. It does so by writing data about each engine to a file called /etc/TWS/TWS Registry.dat.

Important: Do not edit or remove the /etc/TWS/TWS Registry.dat file because this could cause problems with uninstalling Tivoli Workload Scheduler or with installing fix packs. Do not remove this file unless you intend to remove all installed Tivoli Workload Scheduler 8.2 engines from the computer.

Finally, because netman listens for incoming TCP link requests from other Tivoli Workload Scheduler agents, it is important that the netman program for each Tivoli Workload Scheduler engine listen on a unique port. This port is specified by the nm port option in the Tivoli Workload Scheduler localopts file. If you change this option, you must shut down netman and start it again to make the change take effect.

In our test environment, we chose a netman port number and user ID that were the same for each Tivoli Workload Scheduler engine. This makes the values easier to remember and simplifies troubleshooting. Table 4-4 on page 211 shows the names and numbers we used in our testing.
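In each engine's localopts file, the corresponding nm port entries might look like the following sketch (the values are taken from Table 4-4, and the name = value layout is the usual localopts convention):

# in /tivoli/tws/tws-a/localopts (engine A)
nm port =31111

# in /tivoli/tws/tws-b/localopts (engine B)
nm port =31112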
Table 4-4 If possible, choose user IDs and port numbers that are the same

User name   User ID   Netman port
tws-a       31111     31111
tws-b       31112     31112

4.3.2 Verify the Tivoli Workload Scheduler installation

Start Tivoli Workload Scheduler and verify that it starts without any error messages. Note that if there are no active workstations in Tivoli Workload Scheduler for z/OS for the Tivoli Workload Scheduler agent, only the netman process will be started. But you can verify that the netman process is started and that it listens on the IP port number that you have decided to use in your end-to-end environment.

4.4 Define, activate, verify fault-tolerant workstations

To be able to define jobs in Tivoli Workload Scheduler for z/OS to be scheduled on FTWs, the workstations must be defined in the Tivoli Workload Scheduler for z/OS controller. The workstations that are defined via the CPUREC keyword should also be defined in the Tivoli Workload Scheduler for z/OS workstation database before they can be activated in the Tivoli Workload Scheduler for z/OS plan.

The workstations are defined the same way as computer workstations in Tivoli Workload Scheduler for z/OS, except that they need a special flag: fault tolerant. This flag is used to indicate in Tivoli Workload Scheduler for z/OS that these workstations should be treated as FTWs.

When the FTWs have been defined in the Tivoli Workload Scheduler for z/OS workstation database, they can be activated in the Tivoli Workload Scheduler for z/OS plan by running either a plan replan or a plan extend batch job. The process is as follows:
1. Create a CPUREC definition for the workstation as described in “CPUREC statement” on page 187.
2. Define the FTW in the Tivoli Workload Scheduler for z/OS workstation database. Remember to set it to fault tolerant.
3. Run Tivoli Workload Scheduler for z/OS plan replan or plan extend to activate the workstation definition in Tivoli Workload Scheduler for z/OS.
4. Verify that the FTW gets active and linked.
5. Define jobs and job streams on the newly created and activated FTW, as described in 4.5, “Creating fault-tolerant workstation job definitions and job streams” on page 217.

Important: The order of the steps in this process is significant.

4.4.1 Define fault-tolerant workstation in Tivoli Workload Scheduler controller workstation database

A fault-tolerant workstation can be defined either from the Tivoli Workload Scheduler for z/OS legacy ISPF dialogs (use option 1.1 from the main menu) or in the JSC. In the following steps, we show how to define an FTW from the JSC (see Figure 4-19 on page 213):
1. Open the Actions Lists, select New Workstation, then select the instance for the Tivoli Workload Scheduler for z/OS controller where the workstation should be defined (TWSC-zOS in our example).
2. The Properties - Workstation in Database window opens.
3. Select the Fault Tolerant check box and fill in the Name field (the four-character name of the FTW) and, optionally, the Description field. See Figure 4-19 on page 213.

Note: It is a good standard to use the first part of the description field to list the DNS name or host name for the FTW. This makes it easier to remember which server or machine the four-character workstation name in Tivoli Workload Scheduler for z/OS relates to. You can add up to 32 alphanumeric characters in the description field.

4. Save the new workstation definition by clicking OK.
Note: When we used the JSC to create FTWs as described, we sometimes received this error:

GJS0027E Cannot save the workstation xxxx. Reason: EQQW787E FOR FT WORKSTATIONS RESOURCES CANNOT BE USED AT PLANNING

If you receive this error when creating the FTW from the JSC, then select the Resources tab (see Figure 4-19 on page 213) and un-check the Used for planning check box for Resource 1 and Resource 2. This must be done before selecting the Fault Tolerant check box on the General tab.

Figure 4-19 Defining a fault-tolerant workstation from the JSC

4.4.2 Activate the fault-tolerant workstation definition

Fault-tolerant workstation definitions can be activated in the Tivoli Workload Scheduler for z/OS plan by running either the replan or the extend plan programs in the Tivoli Workload Scheduler for z/OS controller.
When running the replan or extend program, Tivoli Workload Scheduler for z/OS creates (or recreates) the Symphony file and distributes it to the domain managers at the first level. These domain managers, in turn, distribute the Symphony file to their subordinate fault-tolerant agents and domain managers, and so on. If the Symphony file is successfully created and distributed, all defined FTWs should be linked and active.

We run the replan program and verify that the Symphony file is created in the end-to-end server. We also verify that the FTWs become available and have linked status in the Tivoli Workload Scheduler for z/OS plan.

4.4.3 Verify that the fault-tolerant workstations are active and linked

First, it should be verified that there is no warning or error message in the replan batch job (EQQMLOG). The message log should show that all topology statements (DOMREC, CPUREC, and USRREC) have been accepted without any errors or warnings.

Verify messages in plan batch job
For a successful creation of the Symphony file, the message log should show messages similar to those in Example 4-14.

Example 4-14 Plan batch job EQQMLOG messages when Symphony file is created
EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPDOMAIN IS: 0000
EQQZ013I NOW PROCESSING PARAMETER LIBRARY MEMBER TPUSER
EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPUSER IS: 0000
EQQQ502I SPECIAL RESOURCE DATASPACE HAS BEEN CREATED.
EQQQ502I 00000020 PAGES ARE USED FOR 00000100 SPECIAL RESOURCE RECORDS.
EQQ3011I WORKSTATION F100 SET AS DOMAIN MANAGER FOR DOMAIN DM100
EQQ3011I WORKSTATION F200 SET AS DOMAIN MANAGER FOR DOMAIN DM200
EQQ3105I A NEW CURRENT PLAN (NCP) HAS BEEN CREATED
EQQ3106I WAITING FOR SCP
EQQ3107I SCP IS READY: START JOBS ADDITION TO SYMPHONY FILE
EQQ4015I RECOVERY JOB OF F100DJ01 HAS NO JOBWS KEYWORD SPECIFIED,
EQQ4015I THE WORKSTATION F100 OF JOB F100DJ01 IS USED
EQQ3108I JOBS ADDITION TO SYMPHONY FILE COMPLETED
EQQ3101I 0000019 JOBS ADDED TO THE SYMPHONY FILE FROM THE CURRENT PLAN
EQQ3087I SYMNEW FILE HAS BEEN CREATED

Verify messages in the end-to-end server message log
In the Tivoli Workload Scheduler for z/OS end-to-end server message log, we see the messages shown in Example 4-15. These messages show that the Symphony file has been created by the plan replan batch job and that it was possible for the end-to-end server to switch to the new Symphony file.
Example 4-15 End-to-end server messages when Symphony file is created
EQQPT30I Starting switching Symphony
EQQPT12I The Mailman process (pid=Unknown) ended successfully
EQQPT12I The Batchman process (pid=Unknown) ended successfully
EQQPT22I Input Translator thread stopped until new Symphony will be available
EQQPT31I Symphony successfully switched
EQQPT20I Input Translator waiting for Batchman and Mailman are started
EQQPT21I Input Translator finished waiting for Batchman and Mailman
EQQPT23I Input Translator thread is running

Verify messages in the controller message log
The Tivoli Workload Scheduler for z/OS controller shows the messages in Example 4-16, which indicate that the Symphony file was created successfully and that the fault-tolerant workstations are active and linked.

Example 4-16 Controller messages when Symphony file is created
EQQN111I SYMNEW FILE HAS BEEN CREATED
EQQW090I THE NEW SYMPHONY FILE HAS BEEN SUCCESSFULLY SWITCHED
EQQWL10W WORK STATION F100, HAS BEEN SET TO LINKED STATUS
EQQWL10W WORK STATION F100, HAS BEEN SET TO ACTIVE STATUS
EQQWL10W WORK STATION F101, HAS BEEN SET TO LINKED STATUS
EQQWL10W WORK STATION F102, HAS BEEN SET TO LINKED STATUS
EQQWL10W WORK STATION F101, HAS BEEN SET TO ACTIVE STATUS
EQQWL10W WORK STATION F102, HAS BEEN SET TO ACTIVE STATUS

Verify that fault-tolerant workstations are active and linked
After the replan job has completed and output messages have been verified, the FTWs are checked using the JSC instance pointing to the Tivoli Workload Scheduler for z/OS controller (Figure 4-20). The Fault Tolerant column indicates that it is an FTW. The Linked column indicates whether the workstation is linked. The Status column indicates whether the mailman process is up and running on the FTW.

Figure 4-20 Status of FTWs in the Tivoli Workload Scheduler for z/OS plan
The F200 workstation is Not Available because we have not installed a Tivoli Workload Scheduler fault-tolerant workstation on this machine yet. We have prepared for a future installation of the F200 workstation by creating the related CPUREC definitions for F200 and by defining the FTW (F200) in the Tivoli Workload Scheduler controller workstation database.

Tip: If a workstation does not link as it should, the cause could be that the writer process has not initialized correctly or that the run number for the Symphony file on the FTW is not the same as the run number on the master. Mark the unlinked workstations and right-click to open a pop-up menu where you can click Link to try to link the workstation. The run number for the Symphony file in the end-to-end server can be seen from the legacy ISPF panels in option 6.6 from the main menu.

Figure 4-21 shows the status of the same FTWs, as it is shown in the JSC, when looking at the Symphony file at domain manager F100. Note that much more information is available for each FTW. For example, in Figure 4-21 we can see that jobman and writer are running and that we can run 20 jobs in parallel on the FTWs (the Limit column). Also note the information in the Run, CPU type, and Domain columns.

The information shown in Figure 4-21 is read from the Symphony file and generated by the plan programs based on the specifications in the CPUREC and DOMREC definitions. This is one of the reasons why we suggest activating support for JSC when running end-to-end scheduling with Tivoli Workload Scheduler for z/OS.

Note that the status of the OPCMASTER workstation is correct; also remember that the OPCMASTER workstation and the MASTERDM domain are predefined in Tivoli Workload Scheduler for z/OS and cannot be changed. Jobman is not running on OPCMASTER (in USS in the end-to-end server) because the end-to-end server is not supposed to run jobs in USS, so the information that jobman is not running on the OPCMASTER workstation is OK.

Figure 4-21 Status of FTWs in the Symphony file on domain manager F100
4.5 Creating fault-tolerant workstation job definitions and job streams

When the FTWs are active and linked in Tivoli Workload Scheduler for z/OS, you can run jobs on these workstations. To submit work to the FTWs in Tivoli Workload Scheduler for z/OS, you should:
1. Define the script (the JCL or the task) that should be executed on the FTW (that is, on the server).
   When defining scripts in Tivoli Workload Scheduler for z/OS, it is important to remember that the script can be placed centrally in the Tivoli Workload Scheduler for z/OS job library or non-centralized on the FTW (on the Tivoli Workload Scheduler server). Definitions of scripts are described in:
   – 4.5.1, “Centralized and non-centralized scripts” on page 217
   – 4.5.2, “Definition of centralized scripts” on page 219
   – 4.5.3, “Definition of non-centralized scripts” on page 221
   – 4.5.4, “Combination of centralized script and VARSUB, JOBREC parameters” on page 232
2. Create a job stream (application) in Tivoli Workload Scheduler for z/OS and add the job (operation) defined in step 1. It is possible to add the job (operation) to an existing job stream and to create dependencies between jobs on FTWs and jobs on the mainframe.
   Definition of FTW jobs and job streams in Tivoli Workload Scheduler for z/OS is described in 4.5.5, “Definition of FTW jobs and job streams in the controller” on page 234.

4.5.1 Centralized and non-centralized scripts

As described in “Tivoli Workload Scheduler for z/OS end-to-end database objects” on page 69, a job can use two kinds of scripts: centralized or non-centralized.

A centralized script is a script that resides in the controller job library (the EQQJBLIB DD card, also called JOBLIB) and that is downloaded to the FTW every time the job is submitted. Figure 4-22 on page 218 illustrates the relationship between the centralized script job definition and the member name in the job library (JOBLIB).
Figure 4-22 Centralized script defined in controller job library (JOBLIB). The figure shows a job definition pointing to member AIXHOUSP of the Tivoli Workload Scheduler for z/OS job library (JOBLIB); the member contains the centralized script, with //*%OPC SCAN and //*%OPC RECOVER directives, an echo of the OPC occurrence plan date, and a call to rmstdlist -p 10.

A non-centralized script is a script that is defined in the SCRPTLIB and that resides on the FTW. Figure 4-23 on page 219 shows the relationship between the job definition and the member name in the script library (EQQSCLIB).
Figure 4-23 Non-centralized script defined in controller script library (EQQSCLIB). The figure shows a job definition pointing to member AIXHOUSP of the Tivoli Workload Scheduler for z/OS script library (EQQSCLIB); the member contains VARSUB (with a TABLES keyword), JOBREC (with JOBSCR, JOBUSR, and RCCONDSUC keywords), and RECOVERY (with OPTION(RERUN), MESSAGE, JOBCMD, and JOBUSR keywords) statements, while the script itself resides on the FTW.

4.5.2 Definition of centralized scripts

Define the centralized script job (operation) in a Tivoli Workload Scheduler for z/OS job stream (application) with the centralized script option set to Y (Yes). See Figure 4-24 on page 220.

Note: The default is N (No) for all operations in Tivoli Workload Scheduler for z/OS.
Figure 4-24 Centralized script option set in ISPF panel or JSC window

A centralized script is a script that resides in the Tivoli Workload Scheduler for z/OS JOBLIB and that is downloaded to the fault-tolerant agent every time the job is submitted. The centralized script is defined the same way as a normal job JCL in Tivoli Workload Scheduler for z/OS.

Example 4-17 Centralized script for job AIXHOUSP defined in controller JOBLIB
EDIT       TWS.V8R20.JOBLIB(AIXHOUSP) - 01.02            Columns 00001 00072
Command ===>                                                  Scroll ===> CSR
****** ***************************** Top of Data ******************************
000001 //*%OPC SCAN
000002 //* OPC Comment: This job calls TWS rmstdlist script.
000003 //* OPC ======== - The rmstdlist script is called with -p flag and
000004 //* OPC            with parameter 10.
000005 //* OPC          - This means that the rmstdlist script will print
000006 //* OPC            files in the stdlist directory older than 10 days.
000007 //* OPC          - If rmstdlist ends with RC in the interval from 1
000008 //* OPC            to 128, OPC will add recovery application
000009 //* OPC            F100CENTRECAPPL.
000010 //* OPC
000011 //*%OPC RECOVER JOBCODE=(1-128),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO)
000012 //* OPC
000013 echo 'OPC occurrence plan date is: &ODMY1.'
000014 rmstdlist -p 10
****** **************************** Bottom of Data ****************************

In the centralized script in Example 4-17 on page 220, we are running the rmstdlist program that is delivered with Tivoli Workload Scheduler. In the centralized script, we use Tivoli Workload Scheduler for z/OS Automatic Recovery as well as JCL variables.

Rules when creating centralized scripts
Follow these rules when creating centralized scripts in the Tivoli Workload Scheduler for z/OS JOBLIB:
– Each line starts in column 1 and ends in column 80.
– A backslash (\) in column 80 can be used to continue script lines with more than 80 characters.
– Blanks at the end of a line are automatically removed.
– Lines that start with //* OPC, //*%OPC, or //*>OPC are used for comments, variable substitution directives, and automatic job recovery. These lines are automatically removed before the script is downloaded to the FTA.

4.5.3 Definition of non-centralized scripts

Non-centralized scripts are defined in a special partitioned data set, EQQSCLIB, that is allocated in the Tivoli Workload Scheduler for z/OS controller started task procedure and used to store the job or task definitions for FTA jobs. The script (the JCL) resides on the fault-tolerant agent.

Note: This is the default behavior in Tivoli Workload Scheduler for z/OS for fault-tolerant agent jobs.

You must use the JOBREC statement in every SCRPTLIB member to specify the script or command to run. In the SCRPTLIB members, you can also specify the following statements:
– VARSUB to use the Tivoli Workload Scheduler for z/OS automatic substitution of variables when the Symphony file is created or when an operation on an FTW is added to the current plan dynamically.
– RECOVERY to use the Tivoli Workload Scheduler recovery.
Example 4-18 shows the syntax for the VARSUB, JOBREC, and RECOVERY statements.

Example 4-18 Syntax for VARSUB, JOBREC, and RECOVERY statements

 VARSUB
   TABLES(GLOBAL|tab1,tab2,..|APPL)
   PREFIX('char')
   BACKPREF('char')
   VARFAIL(YES|NO)
   TRUNCATE(YES|NO)

 JOBREC
   JOBSCR|JOBCMD('task')
   JOBUSR('username')
   INTRACTV(YES|NO)
   RCCONDSUC('success condition')

 RECOVERY
   OPTION(STOP|CONTINUE|RERUN)
   MESSAGE('message')
   JOBCMD|JOBSCR('task')
   JOBUSR('username')
   JOBWS('wsname')
   INTRACTV(YES|NO)
   RCCONDSUC('success condition')

If you define a job with a SCRPTLIB member in the Tivoli Workload Scheduler for z/OS database that contains errors, the daily planning batch job sets the status of that job to failed in the Symphony file. This change of status is not shown in the Tivoli Workload Scheduler for z/OS interface. You can find the messages that explain the error in the log of the daily planning batch job.

If you dynamically add a job to the plan in Tivoli Workload Scheduler for z/OS whose associated SCRPTLIB member contains errors, the job is not added. You can find the messages that explain this failure in the controller EQQMLOG.

Rules when creating JOBREC, VARSUB, or RECOVERY statements
Each statement consists of a statement name, keywords, and keyword values, and follows TSO command syntax rules. When you specify SCRPTLIB statements, follow these rules:
  Statement data must be in columns 1 through 72. Information in columns 73 through 80 is ignored.
  A blank serves as the delimiter between two keywords; if you supply more than one delimiter, the extra delimiters are ignored.
  Continuation characters and blanks are not used to define a statement that continues on the next line.
  Values for keywords are contained within parentheses. If a keyword can have multiple values, the list of values must be separated by valid delimiters. Delimiters are not allowed between a keyword and the left parenthesis of the specified value.
  Type /* to start a comment and */ to end a comment. A comment can span record images in the parameter member and can appear anywhere except in the middle of a keyword or a specified value.
  A statement continues until the next statement or until the end of records in the member.
  If the value of a keyword includes spaces, enclose the value within single or double quotation marks, as in Example 4-19.

Example 4-19 JOBCMD and JOBSCR examples

 JOBCMD('ls la')
 JOBSCR('C:/USERLIB/PROG/XME.EXE')
 JOBSCR("C:/USERLIB/PROG/XME.EXE")
 JOBSCR("C:/USERLIB/PROG/XME.EXE 'THIS IS THE PARAMETER LIST' ")
 JOBSCR('C:/USERLIB/PROG/XME.EXE "THIS IS THE PARAMETER LIST" ')

Description of the VARSUB statement
The VARSUB statement defines the variable substitution options. This statement must always be the first one in the members of the SCRPTLIB. For more information about the variable definition, see IBM Tivoli Workload Scheduler for z/OS Managing the Workload, Version 8.2 (Maintenance Release April 2004), SC32-1263.

Note: VARSUB can be used in combination with a job that is defined with a centralized script.

Figure 4-25 shows the format of the VARSUB statement.

Figure 4-25 Format of the VARSUB statement
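In plain text, the VARSUB format shown in Figure 4-25 corresponds to the syntax already given in Example 4-18; nothing here is new, it is only restated for reference next to the parameter descriptions that follow:

 VARSUB
   TABLES(GLOBAL|tab1,tab2,..|APPL)
   PREFIX('char')
   BACKPREF('char')
   VARFAIL(YES|NO)
   TRUNCATE(YES|NO)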
VARSUB is defined in the members of the EQQSCLIB library, as specified by the EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the plan extend, replan, and Symphony renew batch job JCL.

Description of the VARSUB parameters
The following describes the VARSUB parameters:

TABLES(GLOBAL|APPL|table1,table2,...)
Identifies the variable tables that must be searched and the search order. APPL indicates the application variable table (see the VARIABLE TABLE field in the MCP panel, at Occurrence level). GLOBAL indicates the table defined in the GTABLE keyword of the OPCOPTS controller and BATCHOPT batch options.

PREFIX(char|&)
A non-alphanumeric character that precedes a variable. It serves the same purpose as the ampersand (&) character that is used in variable substitution in z/OS JCL.

BACKPREF(char|%)
A non-alphanumeric character that delimits a variable to form simple and compound variables. It serves the same purpose as the percent (%) character that is used in variable substitution in z/OS JCL.

VARFAIL(NO|YES)
Specifies whether Tivoli Workload Scheduler for z/OS is to issue an error message when a variable substitution error occurs. If you specify NO, the variable string is left unchanged without any translation.

TRUNCATE(YES|NO)
Specifies whether variables are to be truncated if they are longer than the allowed length. If you specify NO and the keywords are longer than the allowed length, an error message is issued. The allowed length is the length of the keyword for which you use the variable. For example, if you specify a variable of five characters for the JOBWS keyword, the variable is truncated to the first four characters.

Description of the JOBREC statement
The JOBREC statement defines the fault-tolerant workstation job properties. You must specify JOBREC for each member of the SCRPTLIB. For each job, this statement specifies the script or the command to run and the user that must run the script or command.
Note: JOBREC can be used in combination with a job that is defined with centralized script.

Figure 4-26 shows the format of the JOBREC statement.

Figure 4-26 Format of the JOBREC statement

JOBREC is defined in the members of the EQQSCLIB library, as specified by the EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the plan extend, replan, and Symphony renew batch job JCL.

Description of the JOBREC parameters
The following describes the JOBREC parameters:

JOBSCR(script name)
Specifies the name of the shell script or executable file to run for the job. The maximum length is 4095 characters. If the script includes more than one word, it must be enclosed within single or double quotation marks. Do not specify this keyword if the job uses a centralized script.

JOBCMD(command name)
Specifies the name of the shell command to run the job. The maximum length is 4095 characters. If the command includes more than one word, it must be enclosed within single or double quotation marks. Do not specify this keyword if the job uses a centralized script.

JOBUSR(user name)
Specifies the name of the user submitting the specified script or command. The maximum length is 47 characters. If you do not specify the user in the JOBUSR keyword, the user defined in the CPUUSER keyword of the CPUREC statement is used. The CPUREC statement is the one related to the workstation on which the specified script or command must run. If the user is not specified in the CPUUSER keyword, the tws user is used.
If the script is centralized, you can also use the job-submit exit (EQQUX001) to specify the user name. This user name overrides the value specified in the JOBUSR keyword. In turn, the value that is specified in the JOBUSR keyword
overrides that specified in the CPUUSER keyword of the CPUREC statement. If no user name is specified, the tws user is used.
If you use this keyword to specify the name of the user who submits the specified script or command on a Windows fault-tolerant workstation, you must associate this user name to the Windows workstation in the USRREC initialization statement.

INTRACTV(YES|NO)
Specifies that a Windows job runs interactively on the Windows desktop. This keyword is used only for jobs running on Windows fault-tolerant workstations.

RCCONDSUC("success condition")
An expression that determines the return code (RC) that is required to consider a job as successful. If you do not specify this keyword, a return code equal to zero corresponds to a successful condition. A return code different from zero corresponds to the job abend.
The success condition maximum length is 256 characters, and the total length of JOBCMD or JOBSCR plus the success condition must not exceed 4086 characters. This is because the TWSRCMAP string is inserted between the success condition and the script or command name. For example, the dir command together with the success condition RC<4 is translated into:
dir TWSRCMAP: RC<4
The success condition expression can contain a combination of comparison and Boolean expressions:
– Comparison expression: Specifies the job return codes. The syntax is (RC operator operand), where:
  • RC is the RC keyword (type RC).
  • operator is the comparison operator. It can have the values shown in Table 4-5.
  • operand is an integer between -2147483647 and 2147483647.

Table 4-5 Comparison operators
  Example    Operator   Description
  RC < a     <          Less than
  RC <= a    <=         Less than or equal to
  RC > a     >          Greater than
  RC >= a    >=         Greater than or equal to
  RC = a     =          Equal to
  RC <> a    <>         Not equal to

For example, you can define a successful job as a job that ends with a return code less than or equal to 3 as follows:
RCCONDSUC "(RC <= 3)"
– Boolean expression: Specifies a logical combination of comparison expressions. The syntax is comparison_expression operator comparison_expression, where:
  • comparison_expression The expression is evaluated from left to right. You can use parentheses to assign a priority to the expression evaluation.
  • operator Logical operator. It can have the following values: and, or, not.
For example, you can define a successful job as a job that ends with a return code less than or equal to 3, or with a return code not equal to 5 and less than 10, as follows:
RCCONDSUC "(RC<=3) OR ((RC<>5) AND (RC<10))"

Description of the RECOVERY statement
The RECOVERY statement defines the Tivoli Workload Scheduler recovery for a job whose status is in error, but whose error code is not FAIL. To run the recovery, you can specify one or both of the following recovery actions:
  A recovery job (JOBCMD or JOBSCR keywords)
  A recovery prompt (MESSAGE keyword)
The recovery actions must be followed by one of the recovery options (the OPTION keyword): stop, continue, or rerun. The default is stop with no recovery job and no recovery prompt. For more information about recovery in a distributed network, see Tivoli Workload Scheduler Reference Guide Version 8.2 (Maintenance Release April 2004), SC32-1274.
The RECOVERY statement is ignored if it is used with a job that runs a centralized script.
Figure 4-27 on page 228 shows the format of the RECOVERY statement.
Figure 4-27 Format of the RECOVERY statement

RECOVERY is defined in the members of the EQQSCLIB library, as specified by the EQQSCLIB DD of the Tivoli Workload Scheduler for z/OS controller and the plan extend, replan, and Symphony renew batch job JCL.

Description of the RECOVERY parameters
The following describes the RECOVERY parameters:

OPTION(STOP|CONTINUE|RERUN)
Specifies the option that Tivoli Workload Scheduler for z/OS must use when a job abends. For every job, Tivoli Workload Scheduler for z/OS enables you to define a recovery option. You can specify one of the following values:
– STOP: Do not continue with the next job. The current job remains in error. You cannot specify this option if you use the MESSAGE recovery action.
– CONTINUE: Continue with the next job. The current job status changes to complete in the z/OS interface.
– RERUN: Automatically rerun the job (once only). The job status changes to ready, and then to the status of the rerun. Before rerunning the job for a second time, an automatically generated recovery prompt is displayed.

MESSAGE("message")
Specifies the text of a recovery prompt, enclosed in single or double quotation marks, to be displayed if the job abends. The text can contain up to 64 characters. If the text begins with a colon (:), the prompt is displayed, but no reply is required to continue processing. If the text begins with an exclamation mark (!), the prompt is not displayed, but a reply is required to proceed. You cannot use the recovery prompt if you specify the recovery STOP option without using a recovery job.
JOBCMD(command name)
Specifies the name of the shell command to run if the job abends. The maximum length is 4095 characters. If the command includes more than one word, it must be enclosed within single or double quotation marks.

JOBSCR(script name)
Specifies the name of the shell script or executable file to be run if the job abends. The maximum length is 4095 characters. If the script includes more than one word, it must be enclosed within single or double quotation marks.

JOBUSR(user name)
Specifies the name of the user submitting the recovery job action. The maximum length is 47 characters. If you do not specify this keyword, the user defined in the JOBUSR keyword of the JOBREC statement is used. Otherwise, the user defined in the CPUUSER keyword of the CPUREC statement is used. The CPUREC statement is the one related to the workstation on which the recovery job must run. If the user is not specified in the CPUUSER keyword, the tws user is used.
If you use this keyword to specify the name of the user who runs the recovery on a Windows fault-tolerant workstation, you must associate this user name to the Windows workstation in the USRREC initialization statement.

JOBWS(workstation name)
Specifies the name of the workstation on which the recovery job or command is submitted. The maximum length is 4 characters. The workstation must belong to the same domain as the workstation on which the main job runs. If you do not specify this keyword, the workstation name of the main job is used.

INTRACTV(YES|NO)
Specifies that the recovery job runs interactively on a Windows desktop. This keyword is used only for jobs running on Windows fault-tolerant workstations.

RCCONDSUC("success condition")
An expression that determines the return code (RC) that is required to consider a recovery job as successful. If you do not specify this keyword, a return code equal to zero corresponds to a successful condition. A return code different from zero corresponds to the job abend.
The success condition maximum length is 256 characters, and the total length of the JOBCMD or JOBSCR plus the success condition must not exceed 4086 characters. This is because the TWSRCMAP string is inserted between the success condition and the script or command name. For example, the dir command together with the success condition RC<4 is translated into:
dir TWSRCMAP: RC<4
The success condition expression can contain a combination of comparison and Boolean expressions:
– Comparison expression: Specifies the job return codes. The syntax is (RC operator operand), where:
  • RC is the RC keyword (type RC).
  • operator is the comparison operator. It can have the values in Table 4-6.
  • operand is an integer between -2147483647 and 2147483647.

Table 4-6 Comparison operator values
  Example    Operator   Description
  RC < a     <          Less than
  RC <= a    <=         Less than or equal to
  RC > a     >          Greater than
  RC >= a    >=         Greater than or equal to
  RC = a     =          Equal to
  RC <> a    <>         Not equal to

For example, you can define a successful job as a job that ends with a return code less than or equal to 3 as follows:
RCCONDSUC "(RC <= 3)"
– Boolean expression: Specifies a logical combination of comparison expressions. The syntax is comparison_expression operator comparison_expression, where:
  • comparison_expression The expression is evaluated from left to right. You can use parentheses to assign a priority to the expression evaluation.
  • operator Logical operator (it can be and, or, or not).
For example, you can define a successful job as a job that ends with a return code less than or equal to 3, or with a return code not equal to 5 and less than 10, as follows:
RCCONDSUC "(RC<=3) OR ((RC<>5) AND (RC<10))"
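To tie the RECOVERY keywords above together, here is a minimal sketch of a RECOVERY statement that uses the colon prefix on MESSAGE (prompt displayed, no reply required) and runs the recovery command on another workstation with JOBWS. The workstation name F101, the prompt text, and the touch command are invented for illustration and are not taken from the scenarios in this book; the inline comment style follows the SCRPTLIB rules described earlier:

 RECOVERY
   OPTION(CONTINUE)                        /* continue with next job */
   MESSAGE(':Job abended, check the log')  /* no reply needed        */
   JOBCMD('touch /tmp/e2e_recovery.flag')  /* hypothetical command   */
   JOBWS(F101)                             /* recovery workstation   */

Because the option is CONTINUE and a recovery command is specified, this sketch stays within the restrictions noted above (a prompt with STOP and no recovery job is not allowed).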
Example VARSUB, JOBREC, and RECOVERY
For the test of VARSUB, JOBREC, and RECOVERY, we used the non-centralized script member as shown in Example 4-20.

Example 4-20 Non-centralized AIX script with VARSUB, JOBREC, and RECOVERY

 EDIT       TWS.V8R20.SCRPTLIB(F100DJ02) - 01.05           Columns 00001 00072
 Command ===>                                                  Scroll ===> CSR
 ****** ***************************** Top of Data ******************************
 000001 /* Definition for job with "non-centralized" script */
 000002 /* ------------------------------------------------ */
 000003 /* VARSUB - to manage JCL variable substitution */
 000004 VARSUB
 000005   TABLES(E2EVAR)
 000006   PREFIX('&')
 000007   BACKPREF('%')
 000008   VARFAIL(YES)
 000009   TRUNCATE(YES)
 000010 /* JOBREC - to define script, user and some other specifications */
 000011 JOBREC
 000012   JOBCMD('rm &TWSHOME/demo.sh')
 000013   JOBUSR ('%TWSUSER')
 000014 /* RECOVERY - to define what FTA should do in case of error in job */
 000015 RECOVERY
 000016   OPTION(RERUN)                      /* Rerun the job after recover*/
 000017   JOBCMD('touch &TWSHOME/demo.sh')   /* Recover job */
 000018   JOBUSR('&TWSUSER')                 /* User for recover job */
 000019   MESSAGE ('Create demo.sh on FTA?') /* Prompt message */
 ****** **************************** Bottom of Data ****************************

The member F100DJ02 in Example 4-20 was created in the SCRPTLIB (EQQSCLIB) partitioned data set. In the non-centralized script F100DJ02, we use VARSUB to specify how we want Tivoli Workload Scheduler for z/OS to scan for and substitute JCL variables.

The JOBREC parameters specify that we will run the UNIX (AIX) rm command for a file named demo.sh. If the file does not exist (it does not exist the first time the script is run), we run the recovery command (touch) that creates the missing file, so we can rerun (OPTION(RERUN)) the JOBREC JOBCMD() without any errors. Before the job is rerun, an operator has to reply yes to the prompt message: Create demo.sh on FTA?

Example 4-21 on page 232 shows another example. The job will be marked complete if the return code from the script is less than 16 and different from 8, or if it is equal to 20.
Example 4-21 Non-centralized script definition with RCCONDSUC parameter

 EDIT       TWS.V8R20.SCRPTLIB(F100DJ03) - 01.01           Columns 00001 00072
 Command ===>                                                  Scroll ===> CSR
 ****** ***************************** Top of Data ******************************
 000001 /* Definition for job with "distributed" script */
 000002 /* -------------------------------------------- */
 000003 /* VARSUB - to manage JCL variable substitution */
 000004 VARSUB
 000005   TABLES(IBMGLOBAL)
 000006   PREFIX(%)
 000007   VARFAIL(YES)
 000008   TRUNCATE(NO)
 000009 /* JOBREC - to define script, user and some other specifications */
 000010 JOBREC
 000011   JOBSCR('/tivoli/tws/scripts/rc_rc.sh 12')
 000012   JOBUSR(%DISTUID.)
 000013   RCCONDSUC('((RC<16) AND (RC<>8)) OR (RC=20)')

Important: Be careful with lowercase and uppercase. In Example 4-21, it is important that the variable name DISTUID is typed with capital letters, because Tivoli Workload Scheduler for z/OS JCL variable names are always uppercase. On the other hand, it is important that the value for the DISTUID variable is defined in Tivoli Workload Scheduler for z/OS variable table IBMGLOBAL with lowercase letters, because the user ID is defined on the UNIX system with lowercase letters.
Remember to type with CAPS OFF when editing members in SCRPTLIB (EQQSCLIB) for jobs with non-centralized script and members in Tivoli Workload Scheduler for z/OS JOBLIB (EQQJBLIB) for jobs with centralized script.

4.5.4 Combination of centralized script and VARSUB, JOBREC parameters

Sometimes it can be necessary to create a member in the EQQSCLIB (normally used for non-centralized script definitions) for a job that is defined in Tivoli Workload Scheduler for z/OS with centralized script.
This can be the case if:
  The RCCONDSUC parameter will be used for the job to accept specific return codes or return code ranges.

  Note: You cannot use the Tivoli Workload Scheduler for z/OS highest return code for fault-tolerant workstation jobs. You have to use the RCCONDSUC parameter.

  A special user should be assigned to the job with the JOBUSR parameter.
  Tivoli Workload Scheduler for z/OS JCL variables should be used in, for example, the JOBUSR() or the RCCONDSUC() parameters.

Remember that the RECOVERY statement cannot be specified in EQQSCLIB for jobs with centralized script. (It will be ignored.)

To make this combination, you simply:
1. Create the centralized script in the Tivoli Workload Scheduler for z/OS JOBLIB. The member name should be the same as the job name defined for the operation (job) in the Tivoli Workload Scheduler for z/OS job stream (application).
2. Create the corresponding member in the EQQSCLIB. The member name should be the same as the member name for the job in the JOBLIB.

For example: We have a job with centralized script. In the job, we should accept return codes less than 7, and the job should run with user dbprod. To accomplish this, we define the centralized script in Tivoli Workload Scheduler for z/OS the same way as shown in Example 4-17 on page 220. Next, we create a member in the EQQSCLIB with the same name as the member name used for the centralized script. This member should only contain the JOBREC RCCONDSUC() and JOBUSR() parameters (Example 4-22).

Example 4-22 EQQSCLIB (SCRIPTLIB) definition for job with centralized script

 EDIT       TWS.V8R20.SCRPTLIB(F100CJ02) - 01.05           Columns 00001 00072
 Command ===>                                                  Scroll ===> CSR
 ****** ***************************** Top of Data ******************************
 000001 JOBREC
 000002   RCCONDSUC('RC<7')
 000003   JOBUSR(dbprod)
 ****** **************************** Bottom of Data ****************************

4.5.5 Definition of FTW jobs and job streams in the controller

When the script is defined either as centralized in the Tivoli Workload Scheduler for z/OS job library (JOBLIB) or as non-centralized in the Tivoli Workload Scheduler for z/OS script library (EQQSCLIB), you can define job streams (applications) to run the defined scripts.

Definition of job streams (applications) for fault-tolerant workstation jobs is done exactly the same way as for normal mainframe job streams: The job is defined in the job stream, and dependencies are added (predecessor jobs, time dependencies, special resources). Optionally, a run cycle can be added to run the job stream at a set time. When the job stream is defined, the fault-tolerant workstation jobs can be executed and the final verification test can be performed.

Figure 4-28 shows an example of a job stream that is used to test the end-to-end scheduling environment. There are four distributed jobs (seen in the left window in the figure), and these jobs will run on workdays (seen in the right window).

Figure 4-28 Example of a job stream used to test end-to-end scheduling
  • 251. It is not necessary to create a run cycle for job streams to test the FTW jobs, as they can be added manually to the plan in Tivoli Workload Scheduler for z/OS. 4.6 Verification test of end-to-end scheduling At this point we have: Installed and configured the Tivoli Workload Scheduler for z/OS controller for end-to-end scheduling Installed and configured the Tivoli Workload Scheduler for z/OS end-to-end server Defined the network topology for the distributed Tivoli Workload Scheduler network in the end-to-end server and plan batch jobs Installed and configured Tivoli Workload Scheduler on the servers in the network for end-to-end scheduling Defined fault-tolerant workstations and activated these workstations in the Tivoli Workload Scheduler for z/OS network Verified that the plan program executed successfully with the end-to-end topology statements Created members with centralized script and non-centralized scripts Created job streams containing jobs with centralized and non-centralized scripts It is time to perform the final verification test of end-to-end scheduling. This test verifies that: Jobs with centralized script definitions can be executed on the FTWs, and the job log can be browsed for these jobs. Jobs with non-centralized script definitions can be executed on the FTWs, and the job log can be browsed for these jobs. Jobs with a combination of centralized and non-centralized script definitions can be executed on the FTWs, and the job log can be browsed for these jobs. The verification can be performed in several ways. Because we would like to verify that our end-to-end environment is working and that it is possible to run jobs on the FTWs, we have focused on this verification. We used the Job Scheduling Console in combination with legacy Tivoli Workload Scheduler for z/OS ISPF panels for the verifications. Of course, it is possible to perform the complete verification only with the legacy ISPF panels. Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 235
Finally, if you decide to use only centralized scripts or only non-centralized scripts, you do not have to verify both cases.

4.6.1 Verification of job with centralized script definitions

Add a job stream with a job defined with centralized script. The job from Example 4-17 on page 220 is used in this example. Before the job was submitted, the JCL (script) was edited and the parameter on the rmstdlist program was changed from 10 to 1 (Figure 4-29).

Figure 4-29 Edit JCL for centralized script, rmstdlist parameter changed from 10 to 1

The job is submitted, and it is verified that the job completes successfully on the FTA. The output is verified by browsing the job log. Figure 4-30 on page 237 shows only the first part of the job log. See the complete job log in Example 4-23 on page 237. From the job log, you can see that the centralized script that was defined in the controller JOBLIB is copied to (see the line with the = JCLFILE text):
/tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK.sh
  • 253. The Tivoli Workload Scheduler for z/OS JCL variable &ODMY1 in the “echo” line (Figure 4-29) has been substituted by the Tivoli Workload Scheduler for z/OS controller with the job stream planning date (for our case, 210704, seen in Example 4-23 on page 237). Figure 4-30 Browse first part of job log for the centralized script job in JSC Example 4-23 The complete job log for the centralized script job =============================================================== = JOB : OPCMASTER#BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK = USER : twstest = JCLFILE : /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8CFD2B8A25EC41.J_0 05_F100CENTHOUSEK.sh = Job Number: 52754 = Wed 07/21/04 21:52:39 DFT =============================================================== TWS for UNIX/JOBMANRC 8.2 AWSBJA001I Licensed Materials Property of IBM 5698-WKB (C) Copyright IBM Corp 1998,2003 US Government User Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc /tivoli/tws/twstest/tws/cen tralized/OPCMASTER.BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK.sh Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 237
  • 254. TWS for UNIX (AIX)/JOBINFO 8.2 (9.5) Licensed Materials Property of IBM 5698-WKB (C) Copyright IBM Corp 1998,2001 US Government User Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. Installed for user ''. Locale LANG set to "C" Now we are running the script /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8C FD2B8A25EC41.J_005_F100CENTHOUSEK.sh OPC occurrence plan date is: 210704 TWS for UNIX/RMSTDLIST 8.2 AWSBJA001I Licensed Materials Property of IBM 5698-WKB (C) Copyright IBM Corp 1998,2003 US Government User Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. AWSBIS324I Will list directories older than -1 /tivoli/tws/twstest/tws/stdlist/2004.07.13 /tivoli/tws/twstest/tws/stdlist/2004.07.14 /tivoli/tws/twstest/tws/stdlist/2004.07.15 /tivoli/tws/twstest/tws/stdlist/2004.07.16 /tivoli/tws/twstest/tws/stdlist/2004.07.18 /tivoli/tws/twstest/tws/stdlist/2004.07.19 /tivoli/tws/twstest/tws/stdlist/logs/20040713_NETMAN.log /tivoli/tws/twstest/tws/stdlist/logs/20040713_TWSMERGE.log /tivoli/tws/twstest/tws/stdlist/logs/20040714_NETMAN.log /tivoli/tws/twstest/tws/stdlist/logs/20040714_TWSMERGE.log /tivoli/tws/twstest/tws/stdlist/logs/20040715_NETMAN.log /tivoli/tws/twstest/tws/stdlist/logs/20040715_TWSMERGE.log /tivoli/tws/twstest/tws/stdlist/logs/20040716_NETMAN.log /tivoli/tws/twstest/tws/stdlist/logs/20040716_TWSMERGE.log /tivoli/tws/twstest/tws/stdlist/logs/20040718_NETMAN.log /tivoli/tws/twstest/tws/stdlist/logs/20040718_TWSMERGE.log =============================================================== = Exit Status : 0 = System Time (Seconds) : 1 Elapsed Time (Minutes) : 0 = User Time (Seconds) : 0 = Wed 07/21/04 21:52:40 DFT =============================================================== This completes the verification of centralized script. 238 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
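In addition to browsing the job log from the controller, you can also cross-check the job directly on the fault-tolerant agent with the Tivoli Workload Scheduler conman command line. This is not part of the verification steps described in this chapter, just an optional extra check; the selection string below simply lists all jobs in the agent's current Symphony file:

 conman "sj @#@.@"

The job submitted from the controller should appear in the output with the same schedule and job name that is shown in the = JOB line of the job log (for example, OPCMASTER#BB8CFD2B8A25EC41.J_005_F100CENTHOUSEK), with a state of SUCC after it completes.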
4.6.2 Verification of job with non-centralized scripts

Add a job stream with a job defined with non-centralized script. Our example uses the non-centralized job script from Example 4-20 on page 231. The job is submitted, and it is verified that the job ends in error. (Remember that the JOBCMD will try to remove a non-existing file.) Reply to the prompt with Yes, and the recovery job is executed (Figure 4-31).

  • The job ends in error with RC=0002.
  • Right-click the job to open a context menu (1).
  • In the context menu, select Recovery Info to open the Job Instance Recovery Information window.
  • The recovery message is shown, and you can reply to the prompt by clicking the Reply to Prompt arrow.
  • Select Yes and click OK to run the recovery job and rerun the failed F100DJ02 job (if the recovery job ends successfully).

Figure 4-31 Running F100DJ02 job with non-centralized script and RECOVERY options

The same process can be performed in the Tivoli Workload Scheduler for z/OS legacy ISPF panels. When the job ends in error, type RI (for Recovery Info) for the job in the Tivoli Workload Scheduler for z/OS Error list to get the panel shown in Figure 4-32 on page 240.
  • 256. Figure 4-32 Recovery Info ISPF panel in Tivoli Workload Scheduler for z/OS To reply Yes to the prompt, type PY in the Option field. Then press Enter several times to see the result of the recovery job in the same panel. The Recovery job info fields will be updated with information for Recovery jobid, Duration, and so on (Figure 4-33). Figure 4-33 Recovery Info after the Recovery job has been executed. The recovery job has been executed successfully and the Recovery Option (Figure 4-32) was rerun, so the failing job (F100DJ02) will be rerun and will complete successfully. Finally, the job log is browsed for the completed F100DJ02 job (Example 4-24 on page 241). The job log shows that the user is twstest ( = USER) and that the twshome directory is /tivoli/tws/twstest/tws (part of the = JCLFILE line). 240 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
Example 4-24 The job log for the second run of F100DJ02 (after the RECOVERY job)

 ===============================================================
 = JOB       : OPCMASTER#BB8D04BFE71A3901.J_010_F100DECSCRIPT01
 = USER      : twstest
 = JCLFILE   : rm /tivoli/tws/twstest/tws/demo.sh
 = Job Number: 24100
 = Wed 07/21/04 22:46:33 DFT
 ===============================================================
 TWS for UNIX/JOBMANRC 8.2
 AWSBJA001I Licensed Materials Property of IBM 5698-WKB (C) Copyright IBM Corp
 1998,2003 US Government User Restricted Rights Use, duplication or disclosure
 restricted by GSA ADP Schedule Contract with IBM Corp.
 AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc rm
 TWS for UNIX (AIX)/JOBINFO 8.2 (9.5)
 Licensed Materials Property of IBM 5698-WKB (C) Copyright IBM Corp 1998,2001
 US Government User Restricted Rights Use, duplication or disclosure restricted
 by GSA ADP Schedule Contract with IBM Corp.
 Installed for user ''.
 Locale LANG set to "C"
 Now we are running the script rm /tivoli/tws/twstest/tws/demo.sh
 ===============================================================
 = Exit Status           : 0
 = System Time (Seconds) : 0     Elapsed Time (Minutes) : 0
 = User Time (Seconds)   : 0
 = Wed 07/21/04 22:46:33 DFT
 ===============================================================

If you compare the job log output with the non-centralized script definition in Example 4-20 on page 231, you see that the user and the twshome directory were defined as Tivoli Workload Scheduler for z/OS JCL variables (&TWSHOME and %TWSUSER). These variables have been substituted with values from the Tivoli Workload Scheduler for z/OS variable table E2EVAR (specified in the VARSUB TABLES() parameter). This variable substitution is performed when the job definition is added to the Symphony file, either during a normal Tivoli Workload Scheduler for z/OS plan extension or replan, or when a user adds the job stream to the plan ad hoc in Tivoli Workload Scheduler for z/OS.

This completes the test of non-centralized script.
  • 258. 4.6.3 Verification of centralized script with JOBREC parameters We did a verification with a job with centralized script combined with a JOBREC statement in the script library (EQQSCLIB). The verification uses a job named F100CJ02 and centralized script, as shown in Example 4-25. The centralized script is defined in the Tivoli Workload Scheduler for z/OS JOBLIB. Example 4-25 Centralized script for test in combination with JOBREC EDIT TWS.V8R20.JOBLIB(F100CJ02) - 01.07 Columns 00001 00072 Command ===> Scroll ===> CSR ****** ***************************** Top of Data ****************************** 000001 //*%OPC SCAN 000002 //* OPC Here is an OPC JCL Variable OYMD1: &OYMD1. 000003 //* OPC 000004 //*%OPC RECOVER JOBCODE=(12),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO) 000005 //* OPC 000006 echo 'Todays OPC date is: &OYMD1' 000007 echo 'Unix system date is: ' 000008 date 000009 echo 'OPC schedule time is: ' &CHHMMSSX 000010 exit 12 ****** **************************** Bottom of Data **************************** The JOBREC statement for the F100CJ02 job is defined in the Tivoli Workload Scheduler for z/OS scriptlib (EQQSCLIB); see Example 4-26. It is important that the member name for the job (F100CJ02 in our example) is the same in JOBLIB and SCRPTLIB. Example 4-26 JOBREC definition for the F100CJ02 job EDIT TWS.V8R20.SCRPTLIB(F100CJ02) - 01.07 Columns 00001 00072 Command ===> Scroll ===> CSR ****** ***************************** Top of Data ****************************** 000001 JOBREC 000002 RCCONDSUC('RC<7') 000003 JOBUSR(maestro) ****** **************************** Bottom of Data **************************** The first time the job is run, it abends with return code 12 (due to the exit 12 line in the centralized script). Example 4-27 on page 243 shows the job log. Note the “= JCLFILE” line. Here you can see TWSRCMAP: RC<7, which is added because we specified RCCONDSUC(‘RC<7’) in the JOBREC definition for the F100CJ02 job. 242 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 259. Example 4-27 Job log for the F100CJ02 job (ends with return code 12) =============================================================== = JOB : OPCMASTER#BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01 = USER : maestro = JCLFILE : /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_0 20_F100CENTSCRIPT01.sh TWSRCMAP: RC<7 = Job Number: 56624 = Wed 07/21/04 23:07:16 DFT =============================================================== TWS for UNIX/JOBMANRC 8.2 AWSBJA001I Licensed Materials Property of IBM 5698-WKB (C) Copyright IBM Corp 1998,2003 US Government User Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc /tivoli/tws/twstest/tws/cen tralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh TWS for UNIX (AIX)/JOBINFO 8.2 (9.5) Licensed Materials Property of IBM 5698-WKB (C) Copyright IBM Corp 1998,2001 US Government User Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. Installed for user ''. Locale LANG set to "C" Todays OPC date is: 040721 Unix system date is: Wed Jul 21 23:07:17 DFT 2004 OPC schedule time is: 23021516 =============================================================== = Exit Status : 12 = System Time (Seconds) : 0 Elapsed Time (Minutes) : 0 = User Time (Seconds) : 0 = Wed 07/21/04 23:07:17 DFT =============================================================== The job log also shows that the user is set to maestro (the = USER line). This is because we specified JOBUSR(maestro) in the JOBREC statement. Next, before the job is rerun, the JCL (the centralized script) is edited, and the last line is changed from exit 12 to exit 6. Example 4-28 on page 244 shows the edited JCL. Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 243
Example 4-28 The script (JCL) for the F100CJ02 job is edited; exit changed to 6

 ****** ***************************** Top of Data ******************************
 000001 //*>OPC SCAN
 000002 //* OPC Here is an OPC JCL Variable OYMD1: 040721
 000003 //* OPC
 000004 //*>OPC RECOVER JOBCODE=(12),ADDAPPL=(F100CENTRECAPPL),RESTART=(NO)
 000005 //* OPC MSG:
 000006 //* OPC MSG: I *** R E C O V E R Y   A C T I O N S   T A K E N ***
 000007 //* OPC
 000008 echo 'Todays OPC date is: 040721'
 000009 echo
 000010 echo 'Unix system date is: '
 000011 date
 000012 echo
 000013 echo 'OPC schedule time is: ' 23021516
 000014 echo
 000015 exit 6
 ****** **************************** Bottom of Data ****************************

Note that the line with the Tivoli Workload Scheduler for z/OS Automatic Recovery directive has changed: The % sign has been replaced by the > sign. This means that Tivoli Workload Scheduler for z/OS has performed the recovery action by adding the F100CENTRECAPPL job stream (application).

The result after the edit and rerun of the job is that the job completes successfully. (It is marked as completed with return code = 0 in Tivoli Workload Scheduler for z/OS.) The RCCONDSUC() parameter in the scriptlib definition for the F100CJ02 job sets the job to successful even though the exit code from the script was 6 (Example 4-29).

Example 4-29 Job log for the F100CJ02 job with script exit code = 6 =============================================================== = JOB : OPCMASTER#BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01 = USER : maestro = JCLFILE : /tivoli/tws/twstest/tws/centralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_0 20_F100CENTSCRIPT01.sh TWSRCMAP: RC<7 = Job Number: 41410 = Wed 07/21/04 23:35:48 DFT =============================================================== TWS for UNIX/JOBMANRC 8.2 AWSBJA001I Licensed Materials Property of IBM 5698-WKB (C) Copyright IBM Corp 1998,2003 US Government User Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
  • 261. AWSBIS307I Starting /tivoli/tws/twstest/tws/jobmanrc /tivoli/tws/twstest/tws/cen tralized/OPCMASTER.BB8D0F9DEE6AE7C5.J_020_F100CENTSCRIPT01.sh TWS for UNIX (AIX)/JOBINFO 8.2 (9.5) Licensed Materials Property of IBM 5698-WKB (C) Copyright IBM Corp 1998,2001 US Government User Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. Installed for user ''. Locale LANG set to "C" Todays OPC date is: 040721 Unix system date is: Wed Jul 21 23:35:49 DFT 2004 OPC schedule time is: 23021516 =============================================================== = Exit Status : 6 = System Time (Seconds) : 0 Elapsed Time (Minutes) : 0 = User Time (Seconds) : 0 = Wed 07/21/04 23:35:49 DFT =============================================================== This completes the verification of centralized script combined with JOBREC statements. 4.7 Activate support for the Tivoli Workload Scheduler Job Scheduling Console To activate support for use of the Tivoli Workload Scheduler Job Scheduling Console (JSC), perform the following steps: 1. Install and start a Tivoli Workload Scheduler for z/OS JSC server on mainframe. 2. Install Tivoli Management Framework 4.1 or 3.7.1. 3. Install Job Scheduling Services in Tivoli Management Framework. 4. To be able to work with Tivoli Workload Scheduler for z/OS (OPC) controllers from the JSC: a. Install the Tivoli Workload Scheduler for z/OS connector in Tivoli Management Framework. b. Create instances in Tivoli Management Framework that point to the Tivoli Workload Scheduler for z/OS controllers you want to access from the JSC. Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 245
5. To be able to work with Tivoli Workload Scheduler domain managers or fault-tolerant agents from the JSC:
   a. Install the Tivoli Workload Scheduler connector in Tivoli Management Framework. Note that the Tivoli Management Framework server or managed node must be installed on the machine where the Tivoli Workload Scheduler instance is installed.
   b. Create instances in Tivoli Management Framework that point to the Tivoli Workload Scheduler domain managers or fault-tolerant agents that you would like to access from the JSC.
6. Install the JSC on the workstations where it should be used.

The following sections describe the installation steps in more detail.

4.7.1 Install and start Tivoli Workload Scheduler for z/OS JSC server

To use the Tivoli Workload Scheduler Job Scheduling Console for communication with Tivoli Workload Scheduler for z/OS, you must initialize the Tivoli Workload Scheduler for z/OS connector. The connector forms the bridge between the Tivoli Workload Scheduler Job Scheduling Console and the Tivoli Workload Scheduler for z/OS product.

The JSC communicates with Tivoli Workload Scheduler for z/OS through the scheduler server using the TCP/IP protocol. The JSC needs the server to run as a started task in a separate address space. The Tivoli Workload Scheduler for z/OS server communicates with Tivoli Workload Scheduler for z/OS and passes the data and return codes back to the connector.

The security model that is implemented for the Tivoli Workload Scheduler Job Scheduling Console is similar to that already implemented by other Tivoli products that have been ported to z/OS (namely IBM Tivoli User Administration and IBM Tivoli Security Management). The Tivoli Framework security handles the initial user verification, but it is necessary to obtain a valid corresponding RACF user ID to be able to work with the security environment in z/OS.

Even though it is possible to have one server started task handling end-to-end scheduling, JSC communication, and even APPC communication, we recommend having a server started task dedicated to JSC communication (SERVOPTS PROTOCOL(JSC)). This has the advantage that you do not have to stop the whole end-to-end server process if only the JSC communication has to be restarted. We will install a server dedicated to JSC communication and call it the JSC server.
When the JSC is used to access the Tivoli Workload Scheduler for z/OS controller through the JSC server, the JSC server uses the Tivoli Workload Scheduler for z/OS program interface (PIF) to interface with the controller.

You can find an example of the started task procedure in installation member EQQSER in the sample library that is generated by the EQQJOBS installation aid. An example of the initialization statements can be found in the EQQSERP member in the sample library generated by the EQQJOBS installation aid.

After the installation of the JSC server, you can get almost the same functionality from the JSC as you have with the legacy Tivoli Workload Scheduler for z/OS ISPF interface.

Configure and start the JSC server and verify the start
First, create the started task procedure for the JSC server. The EQQSER member in the sample library can be used. Take the following into consideration when customizing the EQQSER sample:
  Make sure that the C runtime library (CEE.SCEERUN) is concatenated in the server JCL in STEPLIB, if it is not in the LINKLIST.
  If you have multiple TCP/IP stacks, or if the name of the procedure that was used to start the TCPIP address space is different from TCPIP, introduce the SYSTCPD DD card pointing to a data set containing the TCPIPJOBNAME parameter. (See DD SYSTCPD in the TCP/IP manuals.)
  Customize the JSC server initialization parameters file. (See the EQQPARM DD statement in the server JCL.) The installation member EQQSERP already contains a template. For information about the JSC server parameters, refer to IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265.

We used the JSC server initialization parameters shown in Example 4-30. Also see Figure 4-34 on page 249.

Example 4-30 The JSC server initialization parameter

 /**********************************************************************/
 /* SERVOPTS: run-time options for the TWSCJSC started task            */
 /**********************************************************************/
 SERVOPTS SUBSYS(TWSC)
 /*--------------------------------------------------------------------*/
 /* TCP/IP server is needed for JSC GUI usage. Protocol=JSC            */
 /*--------------------------------------------------------------------*/
          PROTOCOL(JSC)          /* This server is for JSC   */
          JSCHOSTNAME(TWSCJSC)   /* DNS name for JSC         */
          USERMAP(USERS)         /* RACF user / TMF adm. map */
          PORTNUMBER(38888)      /* Portnumber for JSC comm. */
          CODEPAGE(IBM-037)      /* Codep. EBCDIC/ASCII tr.  */
 /*--------------------------------------------------------------------*/
 /* CALENDAR parameter is mandatory for server when using TCP/IP       */
 /* server.                                                            */
 /*--------------------------------------------------------------------*/
 INIT     ADOICHK(YES)           /* ADOI Check ON            */
          CALENDAR(DEFAULT)      /* Use DEFAULT calendar     */
          HIGHDATE(711231)       /* Default HIGHDATE         */

The SUBSYS(), PROTOCOL(JSC), CALENDAR(), and HIGHDATE() parameters are mandatory for using the Tivoli Job Scheduling Console. Make sure that the port you try to use is not reserved by another application. If JSCHOSTNAME() is not specified, the default is to use the host name that is returned by the operating system.

Note: We got an error when trying to use JSCHOSTNAME with a host name instead of an IP address (EQQPH18E COMMUNICATION FAILED). This problem is fixed with APAR PQ83670.

Remember that you always have to define OMVS segments for the user IDs of Tivoli Workload Scheduler for z/OS server started tasks.

Optionally, the JSC server started task name can be defined in the Tivoli Workload Scheduler for z/OS controller OPCOPTS SERVERS() parameter to let the controller start and stop the JSC server task when the controller itself is started and stopped (Figure 4-34 on page 249).
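As a small sketch of that OPCOPTS keyword, using the started task names from this environment (TWSCJSC for the JSC server and TWSCE2E for the end-to-end server) and omitting all other OPCOPTS keywords, the controller definition could look like the following; treat it as an illustration rather than a complete OPCOPTS statement:

 OPCOPTS  TPLGYSRV(TWSCE2E)          /* End-to-end (topology) server        */
          SERVERS(TWSCJSC,TWSCE2E)   /* Servers started with the controller */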
Note: It is possible to run many TWSC servers, but only one server can be the end-to-end server (also called the topology server). Specify this server using the TPLGYSRV controller option. The SERVERS option specifies the servers that will be started when the controller starts.

[Figure 4-34 shows how the JSC server and the end-to-end server relate to the TWSC controller. The controller OPCOPTS statement contains TPLGYSRV(TWSCE2E) and SERVERS(TWSCJSC,TWSCE2E); the plan batch jobs (CPE, LTPE, and so on) use BATCHOPT TPLGYPRM(TPLGPARM). The JSC server TWSCJSC runs with SERVOPTS SUBSYS(TWSC), PROTOCOL(JSC), CODEPAGE(IBM-037), JSCHOSTNAME(TWSCJSC), PORTNUMBER(38888), and USERMAP(USERS), where the user map member EQQPARM(USERS) contains entries such as USER 'ROOT@M-REGION' RACFUSER(TMF) RACFGROUP(TIVOLI). The end-to-end server TWSCE2E runs with SERVOPTS SUBSYS(TWSC), PROTOCOL(E2E), and TPLGYPRM(TPLGPARM); the topology parameters member EQQPARM(TPLGPARM) contains TOPOLOGY BINDIR(/tws), WRKDIR(/tws/wrkdir), HOSTNAME(TWSC.IBM.COM), PORTNUMBER(31182), TPLGYMEM(TPLGINFO), USRMEM(USERINFO), TRCDAYS(30), and LOGLINES(100), and points to the topology records in EQQPARM(TPLGINFO) (DOMREC and CPUREC statements) and the user records in EQQPARM(USRINFO) (USRREC statements).]

Figure 4-34 JSC Server that communicates with TWSC controller

After the configuration and customization of the JSC server initialization statements and the JSC server started task procedure, we started the JSC server and saw the messages in Example 4-31 during start.

Example 4-31 Messages in EQQMLOG for JSC server when started

 EQQZ005I OPC SUBTASK SERVER IS BEING STARTED
 EQQPH09I THE SERVER IS USING THE TCP/IP PROTOCOL
 EQQPH28I THE TCP/IP STACK IS AVAILABLE
 EQQPH37I SERVER CAN RECEIVE JSC REQUESTS
 EQQPH00I SERVER TASK HAS STARTED

Controlling access to Tivoli Workload Scheduler for z/OS from the JSC
The Tivoli Framework performs a security check, verifying the user ID and password, when a user tries to use the Job Scheduling Console. The Tivoli Framework associates each user ID and password with an administrator. Tivoli Workload Scheduler for z/OS resources are protected by RACF.
The JSC user should have to enter only a single user ID and password combination, not one at the Tivoli Framework level and then another at the Tivoli Workload Scheduler for z/OS level. The security model is based on having the Tivoli Framework security handle the initial user verification while obtaining a valid corresponding RACF user ID. This makes it possible for the user to work with the security environment in z/OS.

The z/OS security is based on a table mapping the Tivoli Framework administrator to an RACF user ID. When a Tivoli Framework user tries to initiate an action on z/OS, the Tivoli administrator ID is used as a key to obtain the corresponding RACF user ID. The JSC server uses the RACF user ID to build the RACF environment to access Tivoli Workload Scheduler for z/OS services, so the Tivoli Administrator must relate, or map, to a corresponding RACF user ID.

There are two ways of getting the RACF user ID:
  The first way is by using the RACF Tivoli-supplied predefined resource class, TMEADMIN. Consult the section about implementing security in Tivoli Workload Scheduler for z/OS in IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265, for the complete setup of the TMEADMIN RACF class.
  The other way is to use a new OPC server initialization parameter to define a member in the file identified by the EQQPARM DD statement in the server startup job. This member contains all of the associations of a TME user with an RACF user ID. You should set the USERMAP parameter in the JSC server SERVOPTS initialization statement to define the member name.

Use of the USERMAP(USERS)
We used the JSC server SERVOPTS USERMAP(USERS) parameter to define the mapping between Tivoli Framework Administrators and z/OS RACF users. USERMAP(USERS) means that the definitions (mappings) are defined in a member named USERS in the EQQPARM library. See Figure 4-35 on page 251.
[Figure 4-35 shows the USER entries in the USERMAP(USERS) member and how the JSC server on OPCMASTER uses them: a Job Scheduling Console user connects through the Tivoli Framework and the OPC connector, the JSC server applies the USERMAP, and RACF authorizes the requested actions. The EQQPARM(USERS) member in the figure contains:
  USER 'ROOT@M-REGION'   RACFUSER(TMF) RACFGROUP(TIVOLI)
  USER 'MIKE@M-REGION'   RACFUSER(MAL) RACFGROUP(TIVOLI)
  USER 'FINN@M-REGION'   RACFUSER(FBK) RACFGROUP(TIVOLI)
  USER 'STEFAN@M-REGION' RACFUSER(SF)  RACFGROUP(TIVOLI)
The figure also makes these points:
  • When a JSC user connects to the computer running the OPC connector, the user is identified as a local TMF administrator.
  • When the user attempts to view or modify the OPC databases or plan, the JSC Server task uses RACF to determine whether to authorize the action.
  • If the USERMAP option is specified in the SERVOPTS of the JSC Server task, the JSC Server uses this map to associate TMF administrators with RACF users.
  • It is also possible to activate the TMEADMIN RACF class and add the TMF administrator names directly in there.
  • For auditing purposes, it is recommended that one TMF administrator be defined for each RACF user.]

Figure 4-35 The relation between TMF administrators and RACF users via USERMAP

For example, in the definitions in the USERS member in EQQPARM in Figure 4-35, TMF administrator MIKE@M-REGION is mapped to RACF user MAL (MAL is a member of RACF group TIVOLI). If MIKE logs in to the TMF region named M-REGION and, as MIKE@M-REGION, works with the Tivoli Workload Scheduler for z/OS controller from the JSC, he will have the access defined for RACF user MAL.

In other words, the USER definition maps TMF Administrator MIKE@M-REGION to RACF user MAL. Whatever MIKE@M-REGION does from the JSC in the controller will be performed with the RACF authorization defined for the MAL user. All logging in RACF will also be done for the MAL user.

The TMF Administrator is defined in TMF with a certain authorization level. The TMF Administrator must have the USER role to be able to use the Tivoli Workload Scheduler for z/OS connector.
  • 268. Notes: If you decide to use the USERMAP to map TMF administrators to RACF users, you should be aware that users with update access to the member with the mapping definitions (the USERS member in our example) can get access to the Tivoli Workload Scheduler for z/OS controller by editing the mapping definitions. To avoid any misuse, make sure that the member with the mapping definitions is protected according to your security standards. Or use the standard RACF TMEADMIN resource class in RACF to do the mapping. To be able to audit what different JSC users do in Tivoli Workload Scheduler for z/OS, we recommend that you establish a one-to-one relationship between the TMF Administrator and the corresponding RACF user. (That is, you should not allow multiple users to use the same TMF Administrator by adding several different logons to one TMF Administrator.) 4.7.2 Installing and configuring Tivoli Management Framework 4.1 As we have already discussed, for the new Job Scheduling Console interface to communicate with the scheduling engines, it requires that a few other components be installed. If you are still not sure how all of the pieces fit together, review 2.4, “Job Scheduling Console and related components” on page 89. When installing Tivoli Workload Scheduler 8.2 using the ISMP installer GUI, you are given the option to install the Tivoli Workload Scheduler connector. If you choose this option, the installer program automatically installs the following components: Tivoli Management Framework 4.1, configured as a TMR server Job Scheduling Services 1.2 Tivoli Workload Scheduler Connector 8.2 The Tivoli Workload Scheduler 8.2 installer GUI will also automatically create a Tivoli Workload Scheduler connector instance and a TMF administrator associated with your Tivoli Workload Scheduler user. Letting the installer do the work of installing and configuring these components is generally a very good idea because it saves the trouble of performing each of these steps individually. If you choose not to let the Tivoli Workload Scheduler 8.2 installer install and configure these components for you, you can install them later. The following instructions for getting a TMR server installed and up and running, and to get Job Scheduling Services and the connectors installed, are primarily intended for environments that do not already have a TMR server, or one in which a separate TMR server will be installed for IBM Tivoli Workload Scheduler. 252 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 269. In the last part of this section, we discuss in more detail the steps specific to end-to-end scheduling: creating connector instances and TMF administrators. The Tivoli Management Framework is easy to install. If you already have the Framework installed in your organization, it is not necessary to install the components specific to Tivoli Workload Scheduler (the JSS and connectors) on a node in your existing Tivoli Managed Region. You may prefer to install a stand-alone TMR server solely for the purpose of providing the connection between the IBM Tivoli Workload Scheduler suite and its interface, the JSC. If your existing TMR is busy with other operations, such as monitoring or software distribution, you might want to consider installing a separate stand-alone TMR server for Tivoli Workload Scheduler. If you decide to install the JSS and connectors on an existing TMR server or managed node, you can skip to “Install Job Scheduling Services” and “Installing the connectors” on page 254. 4.7.3 Alternate method using Tivoli Management Framework 3.7.1 If for some reason you need to use the older 3.7.1 version of TMF instead of the newer 4.1 version, you must first install TMF 3.7B and then upgrade it to 3.7.1. Note: If you are installing TMF 3.7B on AIX 5.1 or later, you will need an updated version of the TMF 3.7B CD because the original TMF 3.7B CD did not correctly recognize AIX 5 as a valid target platform. Order PTF U482278 to get this updated TMF 3.7B CD. Installing Tivoli Management Framework 3.7B The first step is to install Tivoli Management Framework Version 3.7B. For instructions, refer to the Tivoli Framework 3.7.1 Installation Guide, GC32-0395. Upgrade to Tivoli Management Framework 3.7.1 Version 3.7.1 of Tivoli Management Framework is required by Job Scheduling Services 8.1, so if you do not already have Version 3.7.1 of the Framework installed, you must upgrade to it. Install Job Scheduling Services Follow the instructions in the IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257. to install JSS. As we discussed in Chapter 2, “End-to-end scheduling architecture” on page 25, JSS is simply a library used by the Framework, and it is a prerequisite of the connectors. Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 253
  • 270. The hardware and software prerequisites for the Job Scheduling Services are: Software IBM Tivoli Management Framework: Version 3.7.1 or later for Microsoft® Windows, AIX, HP-UX, Sun Solaris, and Linux. Hardware – CD-ROM drive for installation – Approximately 4 MB of free disk space Job Scheduling Services is supported on the following platforms: Microsoft Windows – Windows NT 4.0 with Service Pack 6 – Windows 2000 Server or Advanced Server with Service Pack 3 IBM AIX Version 4.3.3, 5.1, 5.2 HP-UX PA-RISC Version 11.0, 11i Sun Solaris Version 7, 8, 9 Linux Red Hat Version 7.2, 7.3 SuSE Linux Enterprise Server for x86 Version 8 SuSE Linux Enterprise Server for S/390® and zSeries (kernel 2.4, 31–bit) Version 7 (new with this version) Red Hat Linux for S/390 (31–bit) Version 7 (new with this version) Installing the connectors Follow the installation instructions in the IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257. When installing the Tivoli Workload Scheduler connector, we recommend that you do not select the Create Instance check box. Create the instances after the connector has been installed. The hardware and software prerequisites for the Tivoli Workload Scheduler for z/OS connector are: Software: – IBM Tivoli Management Framework: Version 3.7.1 or later – Tivoli Workload Scheduler for z/OS 8.1, or Tivoli OPC 2.1 or later – Tivoli Job Scheduling Services 1.3 – TCP/IP network communications 254 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 271. – A Tivoli Workload Scheduler for z/OS user account (required), which you can create beforehand or have the setup program create for you Hardware: – CD-ROM drive for installation. – Approximately 3 MB of free disk space for the installation. In addition, the Tivoli Workload Scheduler for z/OS connector produces log files and temporary files, which are placed on the local hard drive. Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS connector are supported on the following platforms: Microsoft Windows – Windows NT 4.0 with Service Pack 6 – Windows 2000 Server or Advanced Server with Service Pack 3 IBM AIX Version 4.3.3, 5.1, 5.2 HP-UX PA-RISC Version 11.0, 11i Sun Solaris Version 7, 8, 9 Linux Red Hat Version 7.2, 7.3 SuSE Linux Enterprise Server for x86 Version 8 SuSE Linux Enterprise Server for S/390 and zSeries (kernel 2.4, 31–bit) Version 7 (new with this version) Red Hat Linux for S/390 (31–bit) Version 7 (new with this version) For more information, see IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes, Feature level 1.3, SC32-1258. 4.7.4 Creating connector instances As we discussed in Chapter 2, “End-to-end scheduling architecture” on page 25, the connectors tell the Framework how to communicate with the different types of scheduling engine. To control the workload of the entire end-to-end scheduling network from the Tivoli Workload Scheduler for z/OS controller, it is necessary to create a Tivoli Workload Scheduler for z/OS connector instance to connect to that controller. It may also be a good idea to create a Tivoli Workload Scheduler connector instance on a fault-tolerant agent or domain manager. Sometimes the status may get out of sync between an FTA or DM and the Tivoli Workload Scheduler for z/OS controller. When this happens, it is helpful to be able to connect directly to that agent and get the status directly from there. Retrieving job logs (standard Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 255
lists) is also much faster through a direct connection to the FTA than through the Tivoli Workload Scheduler for z/OS controller.

Creating a Tivoli Workload Scheduler for z/OS connector instance
You have to create at least one Tivoli Workload Scheduler for z/OS connector instance for each z/OS controller that you want to access with the Tivoli Job Scheduling Console. This is done using the wopcconn command. In our test environment, we wanted to be able to connect to a Tivoli Workload Scheduler for z/OS controller running on a mainframe with the host name twscjsc. On the mainframe, the Tivoli Workload Scheduler for z/OS TCP/IP server listens on TCP port 5000. Yarmouth is the name of the TMR-managed node where we created the connector instance. We called the new connector instance TWSC. Here is the command we used:

wopcconn -create -h yarmouth -e TWSC -a twscjsc -p 5000

The result of this will be that when we use JSC to connect to Yarmouth, a new connector instance called TWSC appears in the Job Scheduling list on the left side of the window. We can access the Tivoli Workload Scheduler for z/OS scheduling engine by clicking that new entry in the list. It is also possible to run wopcconn in interactive mode. To do this, just run wopcconn with no arguments. Refer to Appendix A, "Connector reference" on page 343 for a detailed description of the wopcconn command.

Creating a Tivoli Workload Scheduler connector instance
Remember that a Tivoli Workload Scheduler connector instance must have local access to the Tivoli Workload Scheduler engine with which it is associated. This is done using the wtwsconn.sh command. In our test environment, we wanted to be able to use JSC to connect to a Tivoli Workload Scheduler engine on the host Yarmouth. Yarmouth has two Tivoli Workload Scheduler engines installed, so we had to make sure that the engine path we specified when creating the connector pointed to the correct Tivoli Workload Scheduler engine. We called the new connector instance TWS-A to reflect that this connector instance would be associated with the TWS-A engine on this host (as opposed to the other Tivoli Workload Scheduler engine, TWS-B). Here is the command we used:

wtwsconn.sh -create -h yarmouth -n TWS-A -t /tivoli/TWS/tws-a
The result is that when we use JSC to connect to Yarmouth, a new connector instance called TWS-A appears in the Job Scheduling list on the left side of the window. We can access the TWS-A scheduling engine by clicking that new entry in the list. Refer to Appendix A, "Connector reference" on page 343 for a detailed description of the wtwsconn.sh command.

4.7.5 Creating TMF administrators for Tivoli Workload Scheduler

When a user logs onto the Job Scheduling Console, the Tivoli Management Framework verifies that the user's logon is listed in an existing TMF administrator.

TMF administrators for Tivoli Workload Scheduler
A Tivoli Management Framework administrator must be created for the Tivoli Workload Scheduler user. Additional TMF administrators can be created for other users who will access Tivoli Workload Scheduler using JSC.

TMF administrators for Tivoli Workload Scheduler for z/OS
The Tivoli Workload Scheduler for z/OS TCP/IP server associates the Tivoli administrator with a RACF user. If you want to be able to identify each user uniquely, one Tivoli administrator should be defined for each RACF user. If operating system users corresponding to the RACF users do not already exist on the TMR server or on a managed node in the TMR, you must first create one OS user for each Tivoli administrator that will be defined. These users can be created on the TMR server or on any managed node in the TMR. After you have created those users, you can simply add those users' logins to the TMF administrators that you create.

Important: When creating users or setting their passwords, disable any option that requires the user to set a password at the first logon. If the operating system requires the user's password to change at the first logon, the user will have to do this before being able to log on via the Job Scheduling Console.

Creating TMF administrators
If Tivoli Workload Scheduler 8.2 is installed using the graphical ISMP installer, you have the option of installing the Tivoli Workload Scheduler connector automatically during Tivoli Workload Scheduler installation. If you choose this option, the installer will create one TMF administrator automatically. We still recommend that you create one Tivoli Management Framework administrator for each user who will use JSC.
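For illustration, this is how such an operating system user might be created on an AIX managed node. This is only a sketch: the user name jsmith is a placeholder for one of your JSC users, the commands must be run by root, and you should adapt the group and password rules to your own standards.

   mkuser jsmith        (create the OS user that the TMF administrator login will map to)
   passwd jsmith        (set an initial password for the user)
   pwdadm -c jsmith     (clear the password flags so the user is not forced to change the password at first login)

After the user exists, add its login to the corresponding TMF administrator, as described in the steps that follow.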
  • 274. Perform the following steps from the Tivoli desktop to create a new TMF administrator: 1. Double-click the Administrators icon and select Create → Administrator, as shown in Figure 4-36. Figure 4-36 Create Administrator 2. Enter the Tivoli Administrator name you want to create. 3. Click Set Logins to specify the login name (Figure 4-37 on page 259). This field is important because it is used to determine the UID with which many operations are performed and represents a UID at the operating system level. 258 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 275. Figure 4-37 Create Administrator 4. Type in the login name and press Enter. Click Set & Close (Figure 4-38). Figure 4-38 Set Login Names Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 259
5. Enter the name of the group. This field is used to determine the GID under which many operations are performed. Click Set & Close. The TMR roles you assign to the administrator depend on the actions the user will need to perform.

Table 4-7 Authorization roles required for connector actions

An Administrator with this role...   Can perform these actions
user                                 Use the instance; view instance settings
admin, senior, or super              Use the instance; view instance settings; create and remove instances; change instance settings; start and stop instances

6. Click the Set TMR Roles icon and add the desired role or roles (Figure 4-39).

Figure 4-39 Set TMR roles

7. Click Set & Close to finish your input. This returns you to the Administrators desktop (Figure 4-40 on page 261).
  • 277. Figure 4-40 Tivoli Administrator desktop 4.7.6 Installing the Job Scheduling Console Tivoli Workload Scheduler for z/OS is shipped with the latest version (Version 1.3) of the Job Scheduling Console. We recommend that you use this version because it contains the best functionality and stability. The JSC can be installed on the following platforms: Microsoft Windows – Windows NT 4.0 with Service Pack 6 – Windows 2000 Server, Professional and Advanced Server with Service Pack 3 – Windows XP Professional with Service Pack 1 – Windows 2000 Terminal Services IBM AIX Version 4.3.3, 5.1, 5.2 HP-UX PA-RISC 11.0, 11i Sun Solaris Version 7, 8, 9 Linux Red Hat Version 7.2, 7.3 SuSE Linux Enterprise Server for x86 Version 8 Chapter 4. Installing IBM Tivoli Workload Scheduler 8.2 end-to-end scheduling 261
  • 278. Hardware and software prerequisites The following are the hardware and software prerequisites for the Job Scheduling Console. For use with Tivoli Workload Scheduler for z/OS Software: – IBM Tivoli Workload Scheduler for z/OS connector 1.3 – IBM Tivoli Workload Scheduler for z/OS 8.1 or OPC 2.1 or later – Tivoli Job Scheduling Services 1.3 – TCP/IP network communication – Java Runtime Environment Version 1.3 Hardware: – CD-ROM drive for installation – 70 MB disk space for full installation, or 34 MB for customized (English base) installation plus approximately 4 MB for each additional language. For use with Tivoli Workload Scheduler Software: – IBM Tivoli Workload Scheduler connector 8.2 – IBM Tivoli Workload Scheduler 8.2 – Tivoli Job Scheduling Services 1.3 – TCP/IP network communication – Java Runtime Environment Version 1.3 Note: You must use the same versions of the scheduler and the connector. Hardware: – CD-ROM drive for installation – 70 MB disk space for full installation, or 34 MB for customized (English base) installation plus approximately 4 MB for each additional language Note that the Tivoli Workload Scheduler for z/OS connector can support any Operations Planning and Control V2 release level as well as Tivoli Workload Scheduler for z/OS 8.1. For the most recent software requirements, refer to IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes, Feature level 1.3, SC32-1258. 262 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
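Before starting the installation on a UNIX workstation, it can be worth confirming that these prerequisites are met. A minimal sketch only; the path /opt is an example, so check the file system you actually plan to install into:

   java -version     (should report a Java Runtime Environment at the 1.3 level)
   df -k /opt        (confirm roughly 70 MB of free space for a full installation)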
The following steps describe how to install the Job Scheduling Console:
1. Insert the Tivoli Job Scheduling Console CD-ROM into the system CD-ROM drive or mount the CD-ROM from a drive on a remote system. For this example, the CD-ROM drive is drive F.
2. Perform the following steps to run the installation command:
– On Windows:
• From the Start menu, select Run to display the Run dialog.
• In the Open field, enter F:\Install
– On AIX:
• Type the following command: jre -nojit -cp install.zip install
• If that does not work, try: jre -nojit -classpath [path to] classes.zip:install.zip install
• If that does not work either, on sh-like shells, try:
cd [to directory where install.zip is located]
CLASSPATH=[path to] classes.zip:install.zip
export CLASSPATH
java -nojit install
• Or, for csh-like shells, try:
cd [to directory where install.zip is located]
setenv CLASSPATH [path to] classes.zip:install.zip
java -nojit install
– On Sun Solaris:
• Change to the directory where you downloaded install.zip before running the installer.
• Enter sh install.bin
3. The splash window is displayed. Follow the prompts to complete the installation.
Refer to IBM Tivoli Workload Scheduler Job Scheduling Console User's Guide, Feature Level 1.3, SC32-1257 for more information about installation of JSC.

Starting the Job Scheduling Console
Use the following to start the JSC, depending on your platform:
On Windows: Depending on the shortcut location that you specified during installation, click the JS Console icon or select the corresponding item in the Start menu.
On Windows 95 and Windows 98: You can also start the JSC from the command line. Type runcon from the bin\java subdirectory of the installation path.
On AIX: Type ./AIXconsole.sh
On Sun Solaris: Type ./SUNconsole.sh

A Tivoli Job Scheduling Console start-up window is displayed (Figure 4-41).

Figure 4-41 JSC login window

Enter the following information and click the OK button to proceed:
User name - The user name of the person who has permission to use the Tivoli Workload Scheduler for z/OS connector instances
Password - The password for the Tivoli Framework administrator
Host Machine - The name of the Tivoli-managed node that runs the Tivoli Workload Scheduler for z/OS connector
  • 281. 5 Chapter 5. End-to-end implementation scenarios and examples In this chapter, we describe different scenarios and examples for Tivoli Workload Scheduler for z/OS end-to-end scheduling. We describe and show: “Description of our environment and systems” on page 266 “Creation of the Symphony file in detail” on page 273 “Migrating Tivoli OPC tracker agents to end-to-end scheduling” on page 274 “Conversion from Tivoli Workload Scheduler network to Tivoli Workload Scheduler for z/OS managed network” on page 288 “Tivoli Workload Scheduler for z/OS end-to-end fail-over scenarios” on page 303 “Backup and maintenance guidelines for FTAs” on page 318 “Security on fault-tolerant agents” on page 323 “End-to-end scheduling tips and tricks” on page 331 © Copyright IBM Corp. 2004 265
5.1 Description of our environment and systems

In this section, we describe the systems and configuration we used for the end-to-end test scenarios when working on this redbook. Figure 5-1 shows the systems and configuration that are used for the end-to-end scenarios. All of the systems are connected using TCP/IP connections.

Figure 5-1 Systems and configuration used in end-to-end scheduling test scenarios (the MASTERDM master domain runs on a z/OS sysplex of wtsc63, wtsc64, and wtsc65 with the OPCMASTER engine and standby engines; three first-level domains, UK, Europe, and Nordic, are managed by U000/london on AIX, E000/geneva on Windows 2000, and N000/stockholm on AIX; fault-tolerant agents include U001/belfast, U002/edinburgh, E001/rome, E002/amsterdam, N001/oslo, N002/helsinki, and N003/copenhagen; extended agents UX01 and UX02, using the unixlocl and unixrsh access methods, are hosted by U001 and reach remote AIX and Linux systems; the Nordic workstations sit behind a firewall and router, and some links use SSL)

We defined the following started task procedure names in z/OS:
TWST       For the Tivoli Workload Scheduler for z/OS agent
TWSC       For the Tivoli Workload Scheduler for z/OS engine
TWSCE2E    For the end-to-end server
TWSCJSC    For the Job Scheduling Console server

In the following sections we have listed the started task procedure for our end-to-end server and the different initialization statements defined for the end-to-end scheduling network in Figure 5-1.
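With these procedures in place, the end-to-end server is operated like any other started task from the z/OS console. As a simple illustration, using standard MVS operator commands and the procedure name TWSCE2E defined above:

   S TWSCE2E       (start the end-to-end server)
   D A,TWSCE2E     (display the active server address space)
   P TWSCE2E       (stop the end-to-end server)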
  • 283. Started task procedure for the end-to-end server (TWSCE2E) Example 5-1 shows the started task procedure for the Tivoli Workload Scheduler for z/OS end-to-end server, TWSCE2E. Example 5-1 Started task procedure for the end-to-end server TWSCE2E //TWSCE2E EXEC PGM=EQQSERVR,REGION=64M,TIME=1440 //* NOTE: 64M IS THE MINIMUM REGION SIZE FOR E2E (SEE PQ78043) //********************************************************************* //* THIS IS A STARTED TASK PROCEDURE FOR AN OPC SERVER DEDICATED //* FOR END-TO-END SCHEDULING. //********************************************************************* //STEPLIB DD DISP=SHR,DSN=EQQ.SEQQLMD0 //EQQMLIB DD DISP=SHR,DSN=EQQ.SEQQMSG0 //EQQMLOG DD SYSOUT=* //EQQPARM DD DISP=SHR,DSN=TWS.INST.PARM(TWSCE2E) //SYSMDUMP DD DISP=SHR,DSN=TWS.INST.SYSDUMPS //EQQDUMP DD DISP=SHR,DSN=TWS.INST.EQQDUMPS //EQQTWSIN DD DISP=SHR,DSN=TWS.INST.TWSC.TWSIN -> INPUT TO CONTROLLER //EQQTWSOU DD DISP=SHR,DSN=TWS.INST.TWSC.TWSOU -> OUTPUT FROM CONT. //EQQTWSCS DD DISP=SHR,DSN=TWS.INST.CS -> CENTRALIZED SCRIPTS The end-to-end server (TWSCE2E) initialization statements Example 5-2 defines the initialization statements for the end-to-end scheduling network shown in Figure 5-1 on page 266. Example 5-2 End-to-end server (TWSCE2E) initialization statements /*********************************************************************/ /* SERVOPTS: run-time options for end-to-end server */ /*********************************************************************/ SERVOPTS SUBSYS(TWSC) /*-------------------------------------------------------------------*/ /* TCP/IP server is needed for end-to-end usage. */ /*-------------------------------------------------------------------*/ PROTOCOL(E2E) /* This server is for E2E "only"*/ TPLGYPRM(TOPOLOGY) /* E2E topology definition mbr. */ /*-------------------------------------------------------------------*/ /* If you want to use Automatic Restart manager you must specify: */ /*-------------------------------------------------------------------*/ ARM(YES) /* Use ARM to restart if abend */ Chapter 5. End-to-end implementation scenarios and examples 267
  • 284. Example 5-3 shows the TOPOLOGY initialization statements. Example 5-3 TOPOLOGY initialization statements; member name is TOPOLOGY /**********************************************************************/ /* TOPOLOGY: End-to-End options */ /**********************************************************************/ TOPOLOGY TPLGYMEM(TPDOMAIN) /* Mbr. with domain+FTA descr.*/ USRMEM(TPUSER) /* Mbr. with Windows user+pw */ BINDIR('/usr/lpp/TWS/V8R2M0') /* The TWS for z/OS inst. dir */ WRKDIR('/tws/twsce2ew') /* The TWS for z/OS work dir */ LOGLINES(200) /* Lines sent by joblog retr. */ TRCDAYS(10) /* Days to keep stdlist files */ CODEPAGE(IBM-037) /* Codepage for translator */ TCPIPJOBNAME(TCPIP) /* Name of TCPIP started task */ ENABLELISTSECCHK(N) /* CHECK SEC FILE FOR LIST? */ PLANAUDITLEVEL(0) /* Audit level on DMs&FTAs */ GRANTLOGONASBATCH(Y) /* Automatically grant right? */ HOSTNAME(twsce2e.itso.ibm.com) /* DNS hostname for server */ PORTNUMBER(31111) /* Port for netman in USS */ Example 5-4 shows DOMREC and CPUREC initialization statements for the network in Figure 5-1 on page 266. Example 5-4 Domain and fault-tolerant agent definitions; member name is TPDOMAIN /**********************************************************************/ /* DOMREC: Defines the domains in the distributed Tivoli Workload */ /* Scheduler network */ /**********************************************************************/ /*--------------------------------------------------------------------*/ /* Specify one DOMREC for each domain in the distributed network. */ /* With the exception of the master domain (whose name is MASTERDM */ /* and consist of the TWS for z/OS controller). */ /*--------------------------------------------------------------------*/ DOMREC DOMAIN(UK) /* Domain name = UK */ DOMMNGR(U000) /* Domain manager= FLORENCE */ DOMPARENT(MASTERDM) /* Domain parent = MASTERDM */ DOMREC DOMAIN(Europe) /* Domain name = Europe */ DOMMNGR(E000) /* Domain manager= Geneva */ DOMPARENT(MASTERDM) /* Domain parent = MASTERDM */ DOMREC DOMAIN(Nordic) /* Domain name = Nordic */ DOMMNGR(N000) /* Domain manager= Stockholm */ DOMPARENT(MASTERDM) /* Domain parent = MASTERDM */ /**********************************************************************/ /**********************************************************************/ /* CPUREC: Defines the workstations in the distributed Tivoli */ /* Workload Scheduler network */ /**********************************************************************/ /*--------------------------------------------------------------------*/ 268 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 285. /* You must specify one CPUREC for workstation in the TWS network */ /* with the exception of OPC Controller which acts as Master Domain */ /* Manager */ /*--------------------------------------------------------------------*/ CPUREC CPUNAME(U000) /* DM of UK domain */ CPUOS(AIX) /* Windows operating system */ CPUNODE(london.itsc.austin.ibm.com) /* Hostname of CPU */ CPUTCPIP(31182) /* TCP port number of NETMAN */ CPUDOMAIN(UK) /* The TWS domain name for CPU */ CPUTYPE(FTA) /* CPU type: FTA/SAGENT/XAGENT */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(ON) /* Full status on for DM */ CPURESDEP(ON) /* Resolve dependencies on for DM*/ CPULIMIT(20) /* Number of jobs in parallel */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(maestro) /* Default user for jobs on CPU */ CPUREC CPUNAME(E000) /* DM of Europe domain */ CPUOS(WNT) /* Windows 2000 operating system */ CPUNODE(geneva.itsc.austin.ibm.com) /* Hostname of CPU */ CPUTCPIP(31182) /* TCP port number of NETMAN */ CPUDOMAIN(Europe) /* The TWS domain name for CPU */ CPUTYPE(FTA) /* CPU type: FTA/SAGENT/XAGENT */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(ON) /* Full status on for DM */ CPURESDEP(ON) /* Resolve dependencies on for DM*/ CPULIMIT(20) /* Number of jobs in parallel */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(tws) /* Default user for jobs on CPU */ CPUREC CPUNAME(N000) /* DM of Nordic domain */ CPUOS(AIX) /* AIX operating system */ CPUNODE(stockholm.itsc.austin.ibm.com) /* Hostname of CPU */ CPUTCPIP(31182) /* TCP port number of NETMAN */ CPUDOMAIN(Nordic) /* The TWS domain name for CPU */ CPUTYPE(FTA) /* CPU type: FTA/SAGENT/XAGENT */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(ON) /* Full status on for DM */ CPURESDEP(ON) /* Resolve dependencies on for DM*/ CPULIMIT(20) /* Number of jobs in parallel */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(tws) /* Default user for jobs on CPU */ CPUREC CPUNAME(U001) /* 1st FTA in UK domain */ CPUOS(AIX) /* AIX operating system */ CPUNODE(belfast.itsc.austin.ibm.com) /* Hostname of CPU */ CPUTCPIP(31182) /* TCP port number of NETMAN */ CPUDOMAIN(UK) /* The TWS domain name for CPU */ CPUTYPE(FTA) /* CPU type: FTA/SAGENT/XAGENT */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(OFF) /* Full status off for FTA */ CPURESDEP(OFF) /* Resolve dep. off for FTA */ Chapter 5. End-to-end implementation scenarios and examples 269
  • 286. CPULIMIT(20) /* Number of jobs in parallel */ CPUSERVER(1) /* Not allowed for DM/XAGENT CPU */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(tws) /* Default user for jobs on CPU */ CPUREC CPUNAME(U002) /* 2nd FTA in UK domain */ CPUTYPE(FTA) /* CPU type: FTA/SAGENT/XAGENT */ CPUOS(WNT) /* Windows 2000 operating system */ CPUNODE(edinburgh.itsc.austin.ibm.com) /* Hostname of CPU */ CPUTCPIP(31182) /* TCP port number of NETMAN */ CPUDOMAIN(UK) /* The TWS domain name for CPU */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(OFF) /* Full status off for FTA */ CPURESDEP(OFF) /* Resolve dep. off for FTA */ CPULIMIT(20) /* Number of jobs in parallel */ CPUSERVER(2) /* Not allowed for DM/XAGENT CPU */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(tws) /* Default user for jobs on CPU */ CPUUSER(tws) /* Default user for jobs on CPU */ CPUREC CPUNAME(E001) /* 1st FTA in Europe domain */ CPUOS(AIX) /* AIX operating system */ CPUNODE(rome.itsc.austin.ibm.com) /* Hostname of CPU */ CPUTCPIP(31182) /* TCP port number of NETMAN */ CPUDOMAIN(Europe) /* The TWS domain name for CPU */ CPUTYPE(FTA) /* CPU type: FTA/SAGENT/XAGENT */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(OFF) /* Full status off for FTA */ CPURESDEP(OFF) /* Resolve dep. off for FTA */ CPULIMIT(20) /* Number of jobs in parallel */ CPUSERVER(1) /* Not allowed for domain mng. */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(tws) /* Default user for jobs on CPU */ CPUREC CPUNAME(E002) /* 2nd FTA in Europe domain */ CPUOS(WNT) /* Windows 2000 operating system */ CPUNODE(amsterdam.itsc.austin.ibm.com) /* Hostname of CPU */ CPUTCPIP(31182) /* TCP port number of NETMAN */ CPUDOMAIN(Europe) /* The TWS domain name for CPU */ CPUTYPE(FTA) /* CPU type: FTA/SAGENT/XAGENT */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(OFF) /* Full status off for FTA */ CPURESDEP(OFF) /* Resolve dep. off for FTA */ CPULIMIT(20) /* Number of jobs in parallel */ CPUSERVER(2) /* Not allowed for domain mng. */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(tws) /* Default user for jobs on CPU */ CPUREC CPUNAME(N001) /* 1st FTA in Nordic domain */ CPUOS(WNT) /* Windows 2000 operating system */ CPUNODE(oslo.itsc.austin.ibm.com) /* Hostname of CPU */ CPUTCPIP(31182) /* TCP port number of NETMAN */ CPUDOMAIN(Nordic) /* The TWS domain name for CPU */ 270 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 287. CPUTYPE(FTA) /* CPU type: FTA/SAGENT/XAGENT */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(OFF) /* Full status off for FTA */ CPURESDEP(OFF) /* Resolve dep. off for FTA */ CPULIMIT(20) /* Number of jobs in parallel */ CPUSERVER(1) /* Not allowed for domain mng. */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(tws) /* Default user for jobs on CPU */ SSLLEVEL(OFF) /* Use SSL? ON/OFF/ENABLED/FORCE */ SSLPORT(31382) /* Port for SSL communication */ FIREWALL(Y) /* Is CPU behind a firewall? */ CPUREC CPUNAME(N002) /* 2nd FTA in Nordic domain */ CPUOS(UNIX) /* Linux operating system */ CPUNODE(helsinki.itsc.austin.ibm.com) /* Hostname of CPU */ CPUTCPIP(31182) /* TCP port number of NETMAN */ CPUDOMAIN(Nordic) /* The TWS domain name for CPU */ CPUTYPE(FTA) /* CPU type: FTA/SAGENT/XAGENT */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(OFF) /* Full status off for FTA */ CPURESDEP(OFF) /* Resolve dep. off for FTA */ CPULIMIT(20) /* Number of jobs in parallel */ CPUSERVER(2) /* Not allowed for domain mng. */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(tws) /* Default user for jobs on CPU */ SSLLEVEL(OFF) /* Use SSL? ON/OFF/ENABLED/FORCE */ SSLPORT(31382) /* Port for SSL communication */ FIREWALL(Y) /* Is CPU behind a firewall? */ CPUREC CPUNAME(N003) /* 3rd FTA in Nordic domain */ CPUOS(WNT) /* Windows 2000 operating system */ CPUNODE(copenhagen.itsc.austin.ibm.com) /* Hostname of CPU */ CPUTCPIP(31182) /* TCP port number of NETMAN */ CPUDOMAIN(Nordic) /* The TWS domain name for CPU */ CPUTYPE(FTA) /* CPU type: FTA/SAGENT/XAGENT */ CPUAUTOLNK(ON) /* Autolink is on for this CPU */ CPUFULLSTAT(OFF) /* Full status off for FTA */ CPURESDEP(OFF) /* Resolve dep. off for FTA */ CPULIMIT(20) /* Number of jobs in parallel */ CPUSERVER(3) /* Not allowed for domain mng. */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(tws) /* Default user for jobs on CPU */ SSLLEVEL(OFF) /* Use SSL? ON/OFF/ENABLED/FORCE */ SSLPORT(31382) /* Port for SSL communication */ FIREWALL(Y) /* Is CPU behind a firewall? */ CPUREC CPUNAME(UX01) /* X-agent in UK Domain */ CPUOS(OTHER) /* Extended agent */ CPUNODE(belfast.itsc.austin.ibm.com /* Hostname of CPU */ CPUDOMAIN(UK) /* The TWS domain name for CPU */ CPUHOST(U001) /* U001 is the host for x-agent */ CPUTYPE(XAGENT) /* This is an extended agent */ Chapter 5. End-to-end implementation scenarios and examples 271
CPUACCESS(unixlocl) /* use unixlocl access method */ CPULIMIT(2) /* Number of jobs in parallel */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(tws) /* Default user for jobs on CPU */ CPUREC CPUNAME(UX02) /* X-agent in UK Domain */ CPUOS(OTHER) /* Extended agent */ CPUNODE(belfast.itsc.austin.ibm.com) /* Hostname of CPU */ CPUDOMAIN(UK) /* The TWS domain name for CPU */ CPUHOST(U001) /* U001 is the host for x-agent */ CPUTYPE(XAGENT) /* This is an extended agent */ CPUACCESS(unixrsh) /* use unixrsh access method */ CPULIMIT(2) /* Number of jobs in parallel */ CPUTZ(CST) /* Time zone for this CPU */ CPUUSER(tws) /* Default user for jobs on CPU */

User and password definitions for the Windows fault-tolerant workstations are shown in Example 5-5.

Example 5-5 User and password definition for Windows FTAs; member name is TPUSER
/*********************************************************************/
/* USRREC: Windows users password definitions */
/*********************************************************************/
/*-------------------------------------------------------------------*/
/* You must specify at least one USRREC for each Windows workstation */
/* in the distributed TWS network. */
/*-------------------------------------------------------------------*/
USRREC USRCPU(U002) USRNAM(tws) USRPSW('tws')
USRREC USRCPU(E000) USRNAM(tws) USRPSW('tws')
USRREC USRCPU(E002) USRNAM(tws) USRPSW('tws')
USRREC USRCPU(N001) USRNAM(tws) USRPSW('tws')
USRREC USRCPU(N003) USRNAM(tws) USRPSW('tws')
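Because USRPSW values such as these are stored in clear text, the data set that holds the TPUSER member should be readable and updatable only by the people who administer the end-to-end topology. As an illustration only, assuming the parameter data set name TWS.INST.PARM used in our started task JCL and a hypothetical RACF group TWSADMIN, the protection could look like this; adapt the commands to your own security standards:

   ADDSD 'TWS.INST.PARM' UACC(NONE)
   PERMIT 'TWS.INST.PARM' ID(TWSADMIN) ACCESS(UPDATE)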
  • 289. 5.2 Creation of the Symphony file in detail A new Symphony file is generated whenever any of these daily planning batch jobs is run: Extend the current plan. Replan the current plan. Renew the Symphony. Daily planning batch jobs must be able to read from and write to the HFS working directory (WRKDIR) because these jobs create the Symnew file in WRKDIR. For this reason, the group associated with WRKDIR must contain all of the users that will run daily planning batch jobs. The end-to-end server task starts the translator process in USS (via the starter process). The translator process inherits its ownership from the starting task, so it runs as the same user as the end-to-end server task. The translator process must be able to read from and write to the HFS working directory (WRKDIR). For this reason, WRKDIR must be owned by the user associated with the end-to-end server started task (E2ESERV in the following example). This underscores the importance of specifying the correct user and group in EQQPCS05. Figure 5-2 shows the steps of Symphony file creation: 1. The daily planning batch job copies the Symphony Current Plan VSAM data set to an HFS file in WRKDIR called SymUSER, where USER is the user name of the user who submitted the batch job. 2. The daily planning batch job renames SymUSER to Symnew. 3. The translator program running in UNIX System Services copies Symnew to Symphony and Sinfonia. Chapter 5. End-to-end implementation scenarios and examples 273
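Before running the first daily planning job, it can be useful to verify from a USS shell that WRKDIR really has the expected ownership and group access. A small sketch, using the work directory /tws/twsce2ew from our TOPOLOGY statement and the server user and group used in this example environment (E2ESERV and TWSGRP); on most systems the chown command must be issued by a superuser:

   ls -ld /tws/twsce2ew                  (check the current owner, group, and permissions)
   chown E2ESERV:TWSGRP /tws/twsce2ew    (make the end-to-end server user the owner)
   chmod 775 /tws/twsce2ew               (give the TWSGRP group, which contains the daily planning users, write access)

The EQQPCS05 job normally sets this up correctly; the check is only a way to confirm it.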
Figure 5-2 Creation of the Symphony file in WRKDIR (the daily planning batch jobs EXTENDCP, REPLANCP, REFRESHCP, and SYMRENEW run as USER3, a member of group TWSGRP, and copy the Symphony current plan VSAM data set EQQSCPDS into WRKDIR as SymUSER3, which is renamed to Symnew; the starter and translator processes in USS run as E2ESERV, the end-to-end server user, and the translator writes Symphony and Sinfonia)

This illustrates how process ownership of the translator program is inherited from the end-to-end server task. The figure also shows how file ownership of Symnew and Symphony is inherited from the daily planning batch jobs and the translator, respectively.

5.3 Migrating Tivoli OPC tracker agents to end-to-end scheduling

In this section, we describe how to migrate from a Tivoli OPC tracker agent scheduling environment to a Tivoli Workload Scheduler for z/OS end-to-end scheduling environment with Tivoli Workload Scheduler fault-tolerant agents. We show the benefits of migrating to the fault-tolerant workstations with a step-by-step migration procedure.

5.3.1 Migration benefits

If you plan to migrate to the end-to-end solution, you can gain the following advantages: The use of fault-tolerant technology enables you to continue scheduling without a continuous connection to the z/OS engine.
  • 291. Multi-tier architecture enables you to configure your distributed environment into logical and geographic needs through the domain topology. The monitoring of workload can be separated, based on dedicated distributed views. The multi-tier architecture also improves scalability and removes the limitation on the number of tracker agent workstations in Tivoli OPC. (In Tivoli OPC, the designated maximum number of tracer agents was 999, but the practical limit was around 500.) High availability configuration through: – The support of AIX High Availability Cluster Multi-Processing (HACMP™), HP Service Guard, and Windows clustering, for example. – Support for using host names instead of numeric IP addresses. – The ability to change workstation addresses as well as distributed network topology without recycling the Tivoli Workload Scheduler for z/OS controller. It only requires a plan replan. New supported platforms and operating systems, such as: – Windows 2000 and Windows XP – SuSE Linux Enterprise Server for zSeries Version 7 – Red Hat Linux (Intel®) Version 7.2, 7.3 – Other third-party access methods such as Tandem For a complete list of supported platforms and operating system levels, refer to IBM Tivoli Workload Scheduler Release Notes Version 8.2 (Maintenance Release April 2004), SC32-1277. Support for extended agents. Extended agents (XA) are used to extend the job scheduling functions of Tivoli Workload Scheduler to other systems and applications. An extended agent is defined as a workstation that has a host and an access method. Extended agents makes it possible to run jobs in the end-to-end scheduling solution on: – Oracle E-Business Suite – PeopleSoft – SAP R/3 For more information, refer to IBM Tivoli Workload Scheduler for Applications User’s Guide Version 8.2 (Maintenance Release April 2004), SC32-1278. Open extended agent interface, which enables you to write extended agents for non-supported platforms and applications. For example, you can write Chapter 5. End-to-end implementation scenarios and examples 275
  • 292. your own extended agent for Tivoli Storage Manager. For more information, refer to Implementing TWS Extended Agent for Tivoli Storage Manager, GC24-6030. User ID and password definitions for Windows fault-tolerant workstations are easier to implement and maintain. Does not require use of the Tivoli OPC Tracker agent impersonation support. IBM Tivoli Business Systems Manager support enables you to integrate the entire end-to-end environment. If you use alternate workstations for your tracker agents, be aware that this function is not available in fault-tolerant agents. As part of the fault-tolerant technology, a FTW cannot be an alternate workstation. You do not have to touch your planning-related definitions such as run cycles, periods, and calendars. 5.3.2 Migration planning Before starting the migration process, you may consider the following issues: The Job Migration Tool in Tivoli Workload Scheduler for z/OS 8.2 can be used to facilitate the migration from distributed tracker agents to Tivoli Workload Scheduler distributed agents. You may choose to not migrate your entire tracker agent environment at once. For better planning, we recommend first deciding which part of your tracker environment is more eligible to migrate. This enables you to smoothly migrate to the new fault-tolerant agents. The proper decision can be based on: – Agents belonging to a certain business unit – Agents running at a specific location or time zone – Agents having dependencies to Tivoli Workload Scheduler for z/OS job streams – Agents used for testing purposes The tracker agents topology is not based on any domain manager structure as used in the Tivoli Workload Scheduler end-to-end solution, so plan the topology configuration that suits your needs. The guidelines for helping you find your best configuration are detailed in 3.5.4, “Network planning and considerations” on page 141. Even though you can use centralized scripts to facilitate the migration from distributed tracker agents to Tivoli Workload Scheduler distributed agents, it may be necessary to make some modifications to the JCL (the script) used at 276 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
tracker agents when the centralized script for the corresponding fault-tolerant workstation is copied or moved. For example, this is the case for comments:
– In JCL for a tracker agent, a comment line can commence with //*
//* This is a comment line
– In a centralized script, a comment line can commence with //* OPC
//* OPC This is a comment line

Tip: We recommend starting the migration with the less critical workload in the environment. The migration process needs some handling and experience; therefore you could start by migrating a test tracker agent with test scripts. If this is successful, you can continue with less critical production job streams and progress to the most important ones.

If centralized script is used, the migration from tracker agents to fault-tolerant workstations should be a simple task. Basically, the migration is done simply by changing the workstation name from the name of a tracker agent workstation to the name of the new fault-tolerant workstation. This is even more true if you follow the migration checklist that is outlined in the following sections. Also note that with centralized script you can assign a user to a fault-tolerant workstation job exactly the same way as you did for tracker agents (for example, by use of the job submit exit, EQQUX001).

Important: Tivoli OPC tracker agent went out of support on October 31, 2003.

5.3.3 Migration checklist

To guide you through the migration, Table 5-1 provides a step-by-step checklist.

Table 5-1 Migration checklist

Migration actions                                                                      Page
1. Install IBM Tivoli Workload Scheduler end-to-end on z/OS mainframe.                 "Installing IBM Tivoli Workload Scheduler end-to-end solution" on page 278
2. Install fault-tolerant agents on each tracker agent server or system that should be migrated to end-to-end.    "Installing fault-tolerant agents" on page 279
3. Define the topology for the distributed Tivoli Workload Scheduler network.          "Define the network topology in the end-to-end environment" on page 279
4. Decide if centralized, non-centralized, or a combination of centralized and non-centralized script should be used.    "Decide to use centralized or non-centralized script" on page 281
5. Define centralized script.                                                          "Define the centralized script" on page 284
6. Define non-centralized script.                                                      "Define the non-centralized script" on page 285
7. Define user ID and password for Windows fault-tolerant workstations.                "Define the user and password for Windows FTWs" on page 285
8. Change the workstation name inside the job streams from tracker agent workstation name to fault-tolerant workstation name.    "Change the workstation name inside the job streams" on page 285
9. Consider doing some parallel testing before the definitive shift from tracker agents to fault-tolerant agents.    "Parallel testing" on page 286
10. Perform the cutover.                                                               "Perform the cutover" on page 287
11. Educate and train planners and operators.                                          "Education and training of operators and planners" on page 287

5.3.4 Migration actions

We now explain each step of the migration actions listed in Table 5-1 in detail.

Installing IBM Tivoli Workload Scheduler end-to-end solution
The Tivoli Workload Scheduler for z/OS end-to-end feature is required for the migration, and its installation and configuration are detailed in 4.2, "Installing Tivoli Workload Scheduler for z/OS end-to-end scheduling" on page 159.

Important: It is important to start the installation of the end-to-end solution as early as possible in the migration process to gain as much experience as possible with this new environment before it must be handled in the production environment. End-to-end scheduling is not complicated, but job scheduling on the distributed systems works very differently in an end-to-end environment than in the tracker agent scheduling environment.
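Once the end-to-end feature is installed and the server is started, a quick sanity check from a USS shell can confirm that the server's distributed-side processes are running. This is only a sketch; the process names come from the end-to-end architecture described in Chapter 2, and the grep pattern is just an example:

   ps -ef | grep -E 'netman|translator|mailman|batchman'

You should see these processes owned by the end-to-end server user; if they are missing, check the server's EQQMLOG output.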
Installing fault-tolerant agents
When you have decided which tracker agents to migrate, you can install the Tivoli Workload Scheduler code on the machines or servers that host the tracker agent. This enables you to migrate a mixed environment of tracker agents and fault-tolerant workstations in a more controlled way, because both environments (Tivoli Workload Scheduler and Tivoli OPC Tracker Agents) can coexist on the same physical machine. Both environments might coexist until you decide to perform the cutover. Cutover means switching to the fault-tolerant agent after the testing phase. Installation of the fault-tolerant agents is explained in detail in 4.3, "Installing Tivoli Workload Scheduler in an end-to-end environment" on page 207.

Define the network topology in the end-to-end environment
In Tivoli Workload Scheduler for z/OS, define the topology of the Tivoli Workload Scheduler network. The definition process contains the following steps:
1. Designing the end-to-end network topology.
2. Definition of the network topology in Tivoli Workload Scheduler for z/OS with the DOMREC and CPUREC keywords.
3. Definition of the fault-tolerant workstations in the Tivoli Workload Scheduler for z/OS database.
4. Activation of the fault-tolerant workstations in the Tivoli Workload Scheduler for z/OS plan by a plan extend or plan replan batch job.

Tips: If you decide to define a topology with domain managers, you should also define backup domain managers. To better distinguish the fault-tolerant workstations, follow a consistent naming convention.

After completion of the definition process, each workstation should be defined twice, once for the tracker agent and once for the distributed agent. This way, you can run a distributed agent and a tracker agent on the same computer or server. This should enable you to gradually migrate jobs from tracker agents to distributed agents.

Example: From tracker agent network to end-to-end network
In this example, we illustrate how an existing tracker agent network can be reflected in (or converted to) an end-to-end network topology. This example also
shows the major differences between the tracker agent network topology and the end-to-end network topology.

Figure 5-3 shows what we can call a classical tracker agent environment: multiple tracker agents on various operating platforms (AIX, OS/400, and Solaris), with no domain structure of multiple levels (tiers) to minimize the load on the controller. All communication with the tracker agents is handled by a single subtask in the Tivoli Workload Scheduler for z/OS controller started task.

Figure 5-3 A classic tracker agent environment (an OPC controller on z/OS connected directly to tracker agents on AIX, OS/400, and Solaris)

Figure 5-4 shows how the tracker agent environment in Figure 5-3 on page 280 can be defined in an end-to-end scheduling environment by use of domain managers, backup domain managers, and fault-tolerant agents.
Figure 5-4 End-to-end scheduling network with DMs and FTAs (the OPCMASTER controller and end-to-end server in the MASTERDM domain on z/OS, two first-level domains, DomainA and DomainB, each with an AIX domain manager, FDMA and FDMB, and fault-tolerant agents FTA1 through FTA4 on AIX, OS/400, and Solaris, with a backup domain manager defined in each domain)

In the migration phase, it is possible for these two environments to coexist. This means that on every machine, a tracker agent and a fault-tolerant workstation are installed.

Decide to use centralized or non-centralized script

When migrating from tracker agents to fault-tolerant agents, you have two options regarding scripts: you can use centralized or non-centralized scripts. If all of the tracker agent JCL (script) is placed in the Tivoli Workload Scheduler for z/OS controller job library, the simplest approach when migrating to end-to-end is to use centralized scripts. If all of the tracker agent JCL (script) is placed locally on the tracker agent systems, the simplest approach is to use non-centralized scripts. Finally, if the tracker agent JCL is placed both in the Tivoli Workload Scheduler for z/OS controller job library and locally on the tracker agent systems, the simplest approach is to migrate to end-to-end scheduling with a combination of centralized and non-centralized scripts.
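The practical difference is where the job definition lives. With centralized script, the JCL stays in the controller job library (JOBLIB), as it did for the tracker agent. With non-centralized script, each distributed job needs a member in the script library (EQQSCLIB) that points to a script stored locally on the fault-tolerant agent. As a sketch of what such a member can look like (the member name, script path, and user are invented for the illustration; see the product documentation for the complete JOBREC syntax):

/* EQQSCLIB member BACKUP01 - non-centralized script definition     */
JOBREC JOBSCR('/opt/scripts/daily_backup.sh') /* Script on the FTA  */
       JOBUSR(tws)                            /* User that runs it  */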
  • 298. Use of the Job Migration Tool to help with the migration This tool can be used to help analyze the existing tracker agent environment to be able to decide whether the tracker agent JCL should be migrated using centralized script, non-centralized script, or a combination. To run the tool, select option 1.1.5 from the main menu in Tivoli Workload Scheduler for z/OS legacy ISPF. In the panel, enter the name for the tracker agent workstation that you would like to analyze and submit the job generated by Tivoli Workload Scheduler for z/OS. Note: Before submitting the job, modify it by adding all JOBLIBs for the tracker agent workstation that you are analyzing. Also remember to add JOBLIBs processed by the job-library-read exit (EQQUX002) if it is used. For a permanent change of the sample job, modify the sample migration job skeleton, EQQWMIGZ. The tool analyzes the operations (jobs) that are defined on the specified workstation and generates output in four data sets: 1. Report data set (default suffix: LIST) Contains warning messages for the processed jobs on the workstation specified as input to the tool. (See Example 5-6 on page 283.) There will be warning messages for: – Operations (jobs) that are associated with a job library member that uses JCL variables and directives and that have the centralized script option set to N (No). – Scripts (JCL) that do not have variables and are associated with operations that have the centralized script option set to Y (Yes). (This situation lowers performance.) – Operations (jobs) for which the tool did not find the JCL (member not found) in the JOBLIB libraries specified as input to the tool defined in Tivoli Workload Scheduler. Important: Check the tool report for warning messages. For jobs (operations) defined with centralized script option set to No (the default), the tool suggests defining the job on a workstation named DIST. For jobs (operations) defined with centralized script option set to Yes, the tool suggests defining the job on a workstation named CENT. The last part of the report contains a cross-reference that shows which application (job stream) the job (operation) is defined in. 282 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 299. The report is a good starting point for an overview of the migration effort. Note: The NT01JOB1 operation (job) is defined in two different applications (job streams): NT01HOUSEKEEPING and NT01TESTAPPL. The NT01JOB1 operation is defined with centralized script option set to Yes in the NT01TESTAPPL application and No in the NT01HOUSEKEEPING application. That is why the JT01JOB1 is defined on both the CENT and the DIST workstations. 2. JOBLIB data set (default suffix: JOBLIB) This library contains a copy of all detected jobs (members) for a specific workstation. The job is copied from the JOBLIB. In our example (Example 5-6), there are four jobs in this library: NT01AV01, NT01AV02, NT01JOB1, and NT01JOB2. 3. JOBCEN data set (default suffix: JOBCEN) This library contains a copy of all jobs (members) that have centralized scripts for a specific workstation that is defined with the centralized script option set to Yes. The job is copied from the JOBLIB. In our example, (Example 5-6), there are two jobs in this library: NT01JOB1 and NT01JOB2. These jobs were defined in Tivoli Workload Scheduler for z/OS with the centralized script option set to Yes. 4. JOBDIS data set (default suffix: JOBDIS). This library contains all jobs (members) that do not have centralized scripts for a specific workstation. These jobs must be transferred to the fault-tolerant workstation. In our example (Example 5-6), there are three jobs in this library: NT01AV01, NT01AV02, and NT01JOB1. These jobs were defined in Tivoli Workload Scheduler for z/OS with the centralized script option set to No (the default). Example 5-6 Report generated by the Job Migration Tool P R I N T O U T O F W O R K S T A T I O N D E S C R I P T I O N S = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = REPORT TYPE: CROSS-REFERENCE OF JOBNAMES AND ACTIVE APPLICATIONS ================================================================ JOBNAME APPL ID VALID TO OpTYPE_OpNUMBER -------- ---------------- -------- -------------------------------------------------------------- NT01AV01 NT01HOUSEKEEPING 31/12/71 DIST_005 NT01TESTAPPL2 31/12/71 DIST_005 NT01AV02 NT01TESTAPPL2 31/12/71 DIST_010 NT01AV03 NT01TESTAPPL2 31/12/71 DIST_015 WARNING: NT01AV03 member not found in job library NT01JOB1 NT01HOUSEKEEPING 31/12/71 DIST_010 Chapter 5. End-to-end implementation scenarios and examples 283
  • 300. NT01TESTAPPL 31/12/71 CENT_005 WARNING: Member NT01JOB1 contain directives (//*%OPC) or variables (& or % or ?). Modify the member manually or change the operation(s) type to centralized. NT01JOB2 NT01TESTAPPL 31/12/71 CENT_010 WARNING: You could change operation(s) to NON centralized type. APPL ID VALID TO JOBNAME OpTYPE_OpNUMBER ---------------- -------- --------- -------------------------------------------------------------- NT01HOUSEKEEPING 31/12/71 NT01AV01 DIST_005 NT01JOB1 DIST_010 NT01TESTAPPL 31/12/71 NT01JOB1 CENT_005 NT01JOB2 CENT_010 NT01TESTAPPL2 31/12/71 NT01AV01 DIST_005 NT01AV02 DIST_010 NT01AV03 DIST_015 >>>>>>> END OF APPLICATION DESCRIPTION PRINTOUT <<<<<<< Before you migrate the tracker agent to a distributed agent, you should use this tool to obtain these files for help in deciding whether the jobs should be defined with centralized or decentralized scripts. Define the centralized script If you decide to use centralized script for all or some of the tracker agent jobs, do the following: 1. Run the job migration tool for each tracker agent workstation and analyze the generated report. 2. Change the value of the centralized script flag to Yes, based on the result of the job migration tool output and your decision. 3. Run the job migration tool as many times as you want. For example, you can run until there are no warning messages and all jobs are defined on the correct workstation in the report (the CENT workstation). 4. Change the generated JCL (jobs) in the JOBCEN data set (created by the migration tool); for example, it could be necessary to change the comments line from //* to //* OPC. Note: If you plan to run the migration tool several times, you should copy the job to another library when it has been changed and is ready for the switch to avoid it being replaced by a new run of the migration tool. 5. The copied and amended members (jobs) can be activated one by one when the corresponding operation in the Tivoli Workload Scheduler for z/OS application is changed from the tracker agent workstation to the fault-tolerant workstation. 284 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
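For example, a comment line that was acceptable in the tracker agent JCL must get the OPC prefix when the member is used as a centralized script, applying the comment rule shown earlier in this section; the comment text itself is just an illustration:

   Tracker agent JCL:     //* CLEAN UP TEMPORARY FILES
   Centralized script:    //* OPC CLEAN UP TEMPORARY FILES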
Define the non-centralized script
If you decide to use non-centralized script for all or some of the tracker agent jobs, do the following:
1. Run the job migration tool for each tracker agent workstation and analyze the generated report.
2. Run the job migration tool as many times as you want. For example, you can run until there are no warning messages and all jobs are defined on the correct workstation in the report (the DIST workstation).
3. Transfer the scripts from the JOBDIS data set (created by the migration tool) to the distributed agents.
4. Create a member in the script library (SCRPTLIB/EQQSCLIB) for every job in the JOBDIS data set and, optionally, for the jobs in JOBCEN (if you decide to change these jobs from use of centralized script to use of non-centralized script).

Note: The job submit exit EQQUX001 is not called for non-centralized script jobs.

Define the user and password for Windows FTWs
For each user running jobs on Windows fault-tolerant agents, define a new USRREC statement to provide the Windows user and password. USRREC is defined in the member of the EQQPARM library as specified by the USRMEM keyword in the TOPOLOGY statement.

Important: Because the passwords are not encrypted, we strongly recommend that you protect the data set containing the USRREC definitions with your security product.

If you use the impersonation support for NT tracker agent workstations, it does not interfere with the USRREC definitions. The impersonation support assigns a user ID based on the user ID from exit EQQUX001. Because the exit is not called for jobs with non-centralized script, impersonation support is no longer used.

Change the workstation name inside the job streams
At this point in the migration, the end-to-end scheduling environment should be active, and the fault-tolerant workstations on the systems with tracker agents should be active and linked in the plan in Tivoli Workload Scheduler for z/OS. The plan in Tivoli Workload Scheduler for z/OS and the Symphony file on the fault-tolerant agents do not contain any job streams with jobs that are scheduled on the tracker agent workstations.
The job streams (applications) in the Tivoli Workload Scheduler for z/OS controller are still pointing to the tracker agent workstations. In order to submit workload to the distributed environment, you must change the workstation name of your existing job definitions to the new FTW, or define new job streams to replace the job streams with the old tracker agent jobs.

Notes: It is not possible to change the workstation within a job instance from a tracker agent to a fault-tolerant workstation via the Job Scheduling Console. We have already raised this issue with development. The change can be performed via the legacy GUI (ISPF) and the batch loader program. Be aware that changes to the workstation affect only the job stream database. If you want to take this modification into the plans, you must run a long-term plan (LTP) Modify All batch job and a current plan extend or replan batch job.

The highest acceptable return code for operations on fault-tolerant workstations is 0. If you have a tracker agent operation with highest return code set to 8 and you change the workstation for this operation from a tracker agent workstation to a fault-tolerant workstation, you will not be able to save the modified application. When trying to save the application, you will see this error message:
EQQA531E Inconsistent option when FT work station
Be aware of this if you are planning to use Tivoli Workload Scheduler for z/OS mass update functions or unload/reload functions to update a large number of applications.

Parallel testing
If possible, do some parallel testing before the cutover. With parallel testing, you run the same job flow on both types of workstations: tracker agent workstations and fault-tolerant workstations. The only problem with parallel testing is that it requires duplicate versions of the applications (job streams): one application for the tracker agent and one application for the fault-tolerant workstation. Also, you cannot run the same job in both applications, so one of the jobs must be changed to a dummy job. Some initial setup is required to do parallel testing, but when done it will be possible to verify that the jobs are executed in the same sequence, and operators and planners can gain some experience with the new environment.
  • 303. Another approach could be to migrate a few applications from tracker agents to fault-tolerant agents and use these applications to verify the migration strategy, the migrated jobs (JCL/script), and get some experience. When you are satisfied with the test result of these applications, the next step is to migrate the rest of the applications. Perform the cutover When the parallel testing has been completed with satisfactory results, you can do the final cutover. For example, the process can be: Change all workstation names from tracker agent workstation to fault-tolerant workstation for all operations in the Tivoli Workload Scheduler for z/OS controller. This can be done with the Tivoli Workload Scheduler for z/OS mass update function or by the unload (with the Batch Command Interface Tool) edit, and batchload (with Tivoli Workload Scheduler for z/OS batchloader) process. Run the Extend of long-term plan batch job or Modify All of long-term plan in Tivoli Workload Scheduler for z/OS. Verify that the changed applications and operations look correct in the long-term plan. Run the Extend of plan (current plan) batch job. – Verify that the changed applications and operations look correct in the plan. – Verify that the tracker agent jobs have been moved to the new fault-tolerant workstations and that there are no jobs on the tracker agent workstations. Education and training of operators and planners Tracker agents and fault-tolerant workstations work differently and there are new options related to jobs on fault-tolerant workstations. Handling of fault-tolerant workstations is different from handling for tracker agent. A tracker agent workstation can be set to Active or Offline and can be defined with open intervals and servers. A fault-tolerant workstation can be started, stopped, linked, or unlinked. To ensure that the migration from tracker agents to fault-tolerant workstations will be successful, be sure to plan for education of your planners and operators. Chapter 5. End-to-end implementation scenarios and examples 287
  • 304. 5.3.5 Migrating backward Normally, it should not be necessary to migrate backward because it is possible to run the two environments in parallel. As we have shown, you can run a tracker agent and a fault-tolerant agent on the same physical machine. If the preparation, planning, and testing of the migration is done as described in the previous chapters, it should not be necessary to migrate backward. If a situation forces backward migration from the fault-tolerant workstations to tracker agents, follow these steps: 1. Install the tracker agent on the computer. (This is necessary only if you have uninstalled the tracker agent.) 2. Define a new destination in the ROUTOPTS initialization statement of the controller and restart the controller. 3. Make a duplicate of the workstation definition of the computer. Define the new workstation as Computer Automatic instead of Fault Tolerant and specify the destination you defined in step 2. This way, the same computer can be run as a fault-tolerant workstation and as a tracker agent for smoother migration. 4. For non-centralized scripts, copy the scripts from the fault-tolerant workstation repository to the JOBLIB. As an alternative, copy the script to a local directory that can be accessed by the tracker agent and create a JOBLIB member to execute the script. You can accomplish this by using FTP. 5. Implement the EQQUX001 sample to execute jobs with the correct user ID. 6. Modify the workstation name inside the operation. Remember to change the JOBNAME if the member in the JOBLIB has a name different from the member of the script library. 5.4 Conversion from Tivoli Workload Scheduler network to Tivoli Workload Scheduler for z/OS managed network In this section, we outline the guidelines for converting a Tivoli Workload Scheduler network to a Tivoli Workload Scheduler for z/OS managed network. The distributed Tivoli Workload Scheduler network is managed by a Tivoli Workload Scheduler master domain manager, which manages the databases and the plan. Converting the Tivoli Workload Scheduler managed network to a Tivoli Workload Scheduler for z/OS managed network means that responsibility for database and plan management move from the Tivoli Workload Scheduler master domain manager to the Tivoli Workload Scheduler for z/OS engine. 288 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
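As an illustration of step 4, the following sketch transfers one script from a fault-tolerant workstation into a JOBLIB member using the z/OS FTP server. The host name, user ID, password, script path, data set name, and member name are all placeholders, and the site parameters should be adjusted to match the attributes of your JOBLIB.

   # Run on the fault-tolerant workstation: copy one script into the JOBLIB
   ftp -n mvshost <<'EOF'
   user TWSADM secret
   ascii
   quote site recfm=fb lrecl=80
   put /opt/tws/scripts/daily_load.sh 'TWS.V8R2M0.JOBLIB(DAILYLD)'
   quit
   EOF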
5.4 Conversion from Tivoli Workload Scheduler network to Tivoli Workload Scheduler for z/OS managed network

In this section, we outline the guidelines for converting a Tivoli Workload Scheduler network to a Tivoli Workload Scheduler for z/OS managed network. The distributed Tivoli Workload Scheduler network is managed by a Tivoli Workload Scheduler master domain manager, which manages the databases and the plan. Converting the Tivoli Workload Scheduler managed network to a Tivoli Workload Scheduler for z/OS managed network means that responsibility for database and plan management moves from the Tivoli Workload Scheduler master domain manager to the Tivoli Workload Scheduler for z/OS engine.

5.4.1 Illustration of the conversion process

Figure 5-5 shows a distributed Tivoli Workload Scheduler network. The database management and daily planning are carried out by the Tivoli Workload Scheduler master domain manager.

Figure 5-5 Tivoli Workload Scheduler distributed network with a master domain manager

Figure 5-6 shows a Tivoli Workload Scheduler for z/OS managed network. Database management and daily planning are carried out by the Tivoli Workload Scheduler for z/OS engine.

Figure 5-6 Tivoli Workload Scheduler for z/OS network

The conversion process is to change the Tivoli Workload Scheduler master domain manager to the first-level domain manager and then connect it to the Tivoli Workload Scheduler for z/OS engine (the new master domain manager). The result of the conversion is a new end-to-end network managed by the Tivoli Workload Scheduler for z/OS engine (Figure 5-7 on page 291).

Figure 5-7 IBM Tivoli Workload Scheduler for z/OS managed end-to-end network

5.4.2 Considerations before doing the conversion

Before you start to convert your Tivoli Workload Scheduler managed network to a Tivoli Workload Scheduler for z/OS managed network, you should evaluate the positives and negatives of doing the conversion. The pros and cons of doing the conversion will differ from installation to installation: some installations will gain significant benefits from conversion, while other installations will gain fewer benefits. Based on the outcome of the evaluation of pros and cons, it should be possible to make the right decision for your specific installation and current usage of Tivoli Workload Scheduler as well as Tivoli Workload Scheduler for z/OS.

Some important aspects of the conversion that you should consider are:

- How is your Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS organization structured today?
  – Do you have two independent organizations working independently of each other?
  – Do you have two groups of operators and planners to manage Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS?
  – Or do you have one group of operators and planners that manages both the Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS environments?
  – Do you use considerable resources keeping a high skill level for both products, Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS?
- How integrated is the workload managed by Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS?
  – Do you have dependencies between jobs in Tivoli Workload Scheduler and in Tivoli Workload Scheduler for z/OS?
  – Or do most of the jobs in one workload scheduler run independently of jobs in the other scheduler?
  – Have you already managed to solve dependencies between jobs in Tivoli Workload Scheduler and in Tivoli Workload Scheduler for z/OS efficiently?
- The current use of Tivoli Workload Scheduler–specific functions that are not available in Tivoli Workload Scheduler for z/OS.
  – How intensive is the use of prompts, file dependencies, and "repeat range" (run job every 10 minutes) in Tivoli Workload Scheduler? Can these Tivoli Workload Scheduler–specific functions be replaced by Tivoli Workload Scheduler for z/OS–specific functions, or should they be handled in another way? Does it require some locally developed tools, programs, or workarounds?
  – How extensive is the use of Tivoli Workload Scheduler job recovery definitions? Is it possible to handle these Tivoli Workload Scheduler recovery definitions in another way when the job is managed by Tivoli Workload Scheduler for z/OS? Does it require some locally developed tools, programs, or workarounds?
- Will Tivoli Workload Scheduler for z/OS give you some of the functions you are missing in Tivoli Workload Scheduler today?
  – Extended planning capabilities, long-term plan, current plan that spans more than 24 hours?
  – Better handling of carry-forward job streams?
  – Powerful run-cycle and calendar functions?
- Which platforms or systems are going to be managed by the Tivoli Workload Scheduler for z/OS end-to-end scheduling?
- What kind of integration do you have between Tivoli Workload Scheduler and, for example, SAP R/3, PeopleSoft, or Oracle Applications?
- Is a partial conversion of some jobs from the Tivoli Workload Scheduler managed network to the Tivoli Workload Scheduler for z/OS managed network an option?

  Partial conversion: About 15% of your Tivoli Workload Scheduler–managed jobs or workload is directly related to the Tivoli Workload Scheduler for z/OS jobs or workload; that is, the Tivoli Workload Scheduler jobs are either predecessors or successors to Tivoli Workload Scheduler for z/OS jobs. The current handling of these interdependencies is not effective or stable with your current solution. Converting the 15% of jobs to Tivoli Workload Scheduler for z/OS managed scheduling using the end-to-end solution will stabilize dependency handling and make scheduling more reliable. Note that this requires two instances of Tivoli Workload Scheduler workstations (one each for Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS).

- The effort to convert Tivoli Workload Scheduler database object definitions to Tivoli Workload Scheduler for z/OS database object definitions. Will it be possible to convert the database objects with reasonable resources and within a reasonable time frame?

5.4.3 Conversion process from Tivoli Workload Scheduler to Tivoli Workload Scheduler for z/OS

The process of converting from a Tivoli Workload Scheduler managed network to a Tivoli Workload Scheduler for z/OS managed network has several steps. In the following description, we assume that we have an active Tivoli Workload Scheduler for z/OS environment as well as an active Tivoli Workload Scheduler environment. We also assume that the Tivoli Workload Scheduler for z/OS end-to-end server is installed and ready for use. The conversion process mainly contains the following steps or tasks:

1. Plan the conversion and establish new naming standards.
2. Install new Tivoli Workload Scheduler workstation instances dedicated to communicating with the Tivoli Workload Scheduler for z/OS server.
3. Define the topology of the Tivoli Workload Scheduler network in Tivoli Workload Scheduler for z/OS and define the associated Tivoli Workload Scheduler for z/OS fault-tolerant workstations.
4. Create JOBSCR members (in the SCRPTLIB data set) for all Tivoli Workload Scheduler–managed jobs that should be converted.
5. Convert the database objects from Tivoli Workload Scheduler format to Tivoli Workload Scheduler for z/OS format.
6. Educate planners and operators in the new Tivoli Workload Scheduler for z/OS server functions.
7. Test and verify the conversion and finalize for production.

The sequencing of these steps may be different in your environment, depending on the strategy that you follow when doing your own conversion.

Step 1. Planning the conversion

The conversion from Tivoli Workload Scheduler managed scheduling to Tivoli Workload Scheduler for z/OS managed scheduling can be a major project and can require considerable resources, depending on the current size and usage of the Tivoli Workload Scheduler environment. Planning the conversion is an important task and can be used to estimate the effort required to do the conversion as well as to detail the different conversion steps.

In the planning phase you should try to identify special usage of Tivoli Workload Scheduler functions or facilities that are not easily converted to Tivoli Workload Scheduler for z/OS. Furthermore, you should try to outline how these functions or facilities should be handled when scheduling is done by Tivoli Workload Scheduler for z/OS.

Part of planning is also establishing the new naming standards for all or some of the Tivoli Workload Scheduler objects that are going to be converted. Some examples:

- Naming standards for the fault-tolerant workstations in Tivoli Workload Scheduler for z/OS
  Names for workstations can be up to 16 characters in Tivoli Workload Scheduler (if you are using expanded databases). In Tivoli Workload Scheduler for z/OS, workstation names can be up to four characters. This means you have to establish a new naming standard for the fault-tolerant workstations in Tivoli Workload Scheduler for z/OS.
- Naming standards for job names
  In Tivoli Workload Scheduler you can specify job names with lengths of up to 40 characters (if you are using expanded databases). In Tivoli Workload Scheduler for z/OS, job names can be up to eight characters. This means that you have to establish a new naming standard for jobs on fault-tolerant workstations in Tivoli Workload Scheduler for z/OS.
- Adoption of the existing Tivoli Workload Scheduler for z/OS object naming standards
  You probably already have naming standards for job streams, workstations, job names, resources, and calendars in Tivoli Workload Scheduler for z/OS. When converting Tivoli Workload Scheduler database objects to the Tivoli Workload Scheduler for z/OS databases, you must adopt the Tivoli Workload Scheduler for z/OS naming standard.
- Access to the objects in the Tivoli Workload Scheduler for z/OS database and plan
  Access to Tivoli Workload Scheduler for z/OS database and plan objects is protected by your security product (for example, RACF). Depending on the naming standards for the imported Tivoli Workload Scheduler objects, you may need to modify the definitions in your security product.
- Is the current Tivoli Workload Scheduler network topology suitable, and can it be implemented directly in a Tivoli Workload Scheduler for z/OS server?
  The current Tivoli Workload Scheduler network topology, as it is implemented today, may need some adjustments to be optimal. If your Tivoli Workload Scheduler network topology is not optimal, it should be reconfigured when it is implemented in Tivoli Workload Scheduler for z/OS end-to-end.

Step 2. Install Tivoli Workload Scheduler workstation instances for Tivoli Workload Scheduler for z/OS

By Tivoli Workload Scheduler workstation instances, we mean installation and configuration of a new Tivoli Workload Scheduler engine. This engine should be configured to be a domain manager, fault-tolerant agent, or a backup domain manager, according to the Tivoli Workload Scheduler production environment you are going to mirror. Following this approach, you will have two instances on all the Tivoli Workload Scheduler managed systems:

1. One old Tivoli Workload Scheduler workstation instance dedicated to the Tivoli Workload Scheduler master.
2. One new Tivoli Workload Scheduler workstation instance dedicated to the Tivoli Workload Scheduler for z/OS engine (master). Remember to use different port numbers.

By creating dedicated Tivoli Workload Scheduler workstation instances for Tivoli Workload Scheduler for z/OS scheduling, you can start testing the new environment without disturbing the distributed Tivoli Workload Scheduler production environment. This also makes it possible to do partial conversion, testing, and verification without interfering with the Tivoli Workload Scheduler production environment.

You can choose different approaches for the conversion:

- Try to group your Tivoli Workload Scheduler job streams and jobs into logical and isolated groups and then convert them, group by group.
- Convert all job streams and jobs, run some parallel testing and verification, and then do the switch from Tivoli Workload Scheduler–managed scheduling to Tivoli Workload Scheduler for z/OS–managed scheduling in one final step.

The suitable approach differs from installation to installation. Some installations will be able to group job streams and jobs into isolated groups, while others will not. You have to decide the strategy for the conversion based on your installation.

Note: If you decide to reuse the Tivoli Workload Scheduler distributed workstation instances in your Tivoli Workload Scheduler for z/OS managed network, this is also possible. You may decide to move the distributed workstations one by one (depending on how you have grouped your job streams and how you are doing the conversion). When a workstation is going to be moved to Tivoli Workload Scheduler for z/OS, you simply change the port number in the localopts file on the Tivoli Workload Scheduler workstation. The workstation will then be active in Tivoli Workload Scheduler for z/OS at the next plan extension, replan, or redistribution of the Symphony file. (Remember to create the associated DOMREC and CPUREC definitions in the Tivoli Workload Scheduler for z/OS initialization statements.)
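As an illustration of the port change described in the note above, the netman listening port is set by the nm port entry in localopts. The values below are placeholders; the new port must match the CPUTCPIP() value in the CPUREC definition for that workstation.

   # localopts on the workstation being moved (illustrative values only)
   # old value, used while the workstation reported to the TWS master:
   # nm port =31111
   # new value, matching CPUTCPIP() in the CPUREC for this workstation:
   nm port =31182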
Step 3. Define topology of Tivoli Workload Scheduler network in Tivoli Workload Scheduler for z/OS

The topology for your Tivoli Workload Scheduler distributed network can be implemented directly in Tivoli Workload Scheduler for z/OS. This is done by creating the associated DOMREC and CPUREC definitions in the Tivoli Workload Scheduler for z/OS initialization statements.

To activate the topology definitions, create the associated definitions for fault-tolerant workstations in the Tivoli Workload Scheduler for z/OS workstation database. A Tivoli Workload Scheduler for z/OS plan extend or replan will activate these new workstation definitions.

If you are using a dedicated Tivoli Workload Scheduler workstation for Tivoli Workload Scheduler for z/OS scheduling, you can create the topology definitions at an early stage of the conversion process. This way you can:

- Verify that the topology definitions are correct in Tivoli Workload Scheduler for z/OS.
- Verify that the dedicated fault-tolerant workstations are linked and available.
- Start getting some experience with the management of fault-tolerant workstations and a distributed Tivoli Workload Scheduler network.
- Implement monitoring and handling routines in your automation application on z/OS.

Step 4. Create JOBSCR members for all Tivoli Workload Scheduler–managed jobs

Tivoli Workload Scheduler managed jobs that should be converted to Tivoli Workload Scheduler for z/OS must be defined in the SCRPTLIB data set. For every active job defined in the Tivoli Workload Scheduler database, you define a member in the SCRPTLIB data set containing:

- The name of the script or command for the job (defined in the JOBREC JOBSCR() or the JOBREC JOBCMD() specification)
- The name of the user ID that the job should execute under (defined in the JOBREC JOBUSR() specification)

Note: If the same job script is going to be executed on several systems (it is defined on several workstations in Tivoli Workload Scheduler), you only have to create one member in the SCRPTLIB data set. This job (member) can be defined on several fault-tolerant workstations in several job streams in Tivoli Workload Scheduler for z/OS. It requires that the script is placed in a common directory (path) across all systems.
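As an illustration, a SCRPTLIB member for a hypothetical UNIX job could contain something like the sketch below. The member name, script path, and user ID are placeholders; only the JOBSCR() and JOBUSR() keywords are taken from the description above.

   /* SCRPTLIB member DAILYLD (hypothetical example)                 */
   JOBREC JOBSCR('/opt/tws/scripts/daily_load.sh')  /* script to run */
          JOBUSR(tws)                               /* user for job  */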
Step 5. Convert database objects from Tivoli Workload Scheduler to Tivoli Workload Scheduler for z/OS

Tivoli Workload Scheduler database objects that should be converted — job streams, resources, and calendars — probably cannot be converted directly to Tivoli Workload Scheduler for z/OS. In this case you must amend the Tivoli Workload Scheduler database objects to Tivoli Workload Scheduler for z/OS format and create the corresponding objects in the respective Tivoli Workload Scheduler for z/OS databases.

Pay special attention to object definitions such as:

- Job stream run cycles and the use of calendars in Tivoli Workload Scheduler
- Use of local (workstation-specific) resources in Tivoli Workload Scheduler (local resources are converted to global resources by the Tivoli Workload Scheduler for z/OS master)
- Jobs defined with "repeat range" (for example, run every 10 minutes in job streams)
- Job streams defined with dependencies at the job stream level
- Jobs defined with Tivoli Workload Scheduler recovery actions

For these object definitions, you have to design alternative ways of handling them in Tivoli Workload Scheduler for z/OS.

Step 6. Education for planners and operators

Some of the handling of distributed Tivoli Workload Scheduler jobs in Tivoli Workload Scheduler for z/OS will be different from the handling in Tivoli Workload Scheduler. Also, some specific fault-tolerant workstation features will be available in Tivoli Workload Scheduler for z/OS. You should plan for the education of your operators and planners so that they have knowledge of:

- How to define jobs and job streams for the Tivoli Workload Scheduler fault-tolerant workstations
- Specific rules to be followed for scheduling objects related to fault-tolerant workstations
- How to handle jobs and job streams on fault-tolerant workstations
- How to handle resources for fault-tolerant workstations
- The implications of doing, for example, a Symphony redistribution
- How Tivoli Workload Scheduler for z/OS end-to-end scheduling works (engine, server, domain managers)
- How the Tivoli Workload Scheduler network topology has been adopted in Tivoli Workload Scheduler for z/OS

Step 7. Test and verify conversion and finalize for production

After testing your approach for the conversion, doing some trial conversions, and testing the conversion carefully, it is time to do the final conversion to Tivoli Workload Scheduler for z/OS. The goal is to reach this final conversion and switch from Tivoli Workload Scheduler scheduling to Tivoli Workload Scheduler for z/OS scheduling within a reasonable time frame and with a reasonable level of errors.

If the period in which you run Tivoli Workload Scheduler and Tivoli Workload Scheduler for z/OS in parallel is too long, your planners and operators must handle two environments during this period. This is not effective and can cause some frustration for both planners and operators.

The key to a successful conversion is good planning, testing, and verification. When you are comfortable with the testing and verification, it is safe to do the final conversion and finalize for production. Tivoli Workload Scheduler for z/OS will then handle the central and the distributed workload, and you will have one focal point for your workload. The converted Tivoli Workload Scheduler production environment can be stopped.

5.4.4 Some guidelines to automate the conversion process

If you have a large Tivoli Workload Scheduler scheduling environment, doing a manual conversion will be too time-consuming. In this case you should consider trying to automate some or all of the conversion from Tivoli Workload Scheduler to Tivoli Workload Scheduler for z/OS.

One obvious place to automate is the conversion of Tivoli Workload Scheduler database objects to Tivoli Workload Scheduler for z/OS database objects. Although this is not a trivial task, some automation can be implemented. Automation requires some locally developed tools or programs to handle conversion of the database objects.

Some guidelines to help automate the conversion process:

- Create text copies of all the Tivoli Workload Scheduler database objects by using the composer create command (Example 5-7).

Example 5-7 Tivoli Workload Scheduler objects creation
composer create calendars.txt from CALENDARS
composer create workstations.txt from CPU=@
composer create jobdef.txt from JOBS=@#@
composer create jobstream.txt from SCHED=@#@
composer create parameter.txt from PARMS
composer create resources.txt from RESOURCES
composer create prompts.txt from PROMPTS
composer create users.txt from USERS=@#@

  These text files are a good starting point when trying to estimate the effort and time for the conversion from Tivoli Workload Scheduler to Tivoli Workload Scheduler for z/OS.

- Use the workstations.txt file when creating the topology definitions (DOMREC and CPUREC) in Tivoli Workload Scheduler for z/OS. Creating the topology definitions in Tivoli Workload Scheduler for z/OS based on the workstations.txt file is quite straightforward. The task can be automated by coding a program (script or REXX) that reads the workstations.txt file and converts the definitions to DOMREC and CPUREC specifications.

  Restriction: Tivoli Workload Scheduler CPU class definitions cannot be converted directly to similar definitions in Tivoli Workload Scheduler for z/OS.

- Use the jobdef.txt file when creating the SCRPTLIB members. In jobdef.txt, you have the workstation name for the script (used in the job stream definition), the script name (goes into the JOBREC JOBSCR() definition), the stream logon (goes into the JOBREC JOBUSR() definition), the description (can be added as comments in the SCRPTLIB member), and the recovery definition. The recovery definition needs special consideration because it cannot be converted to Tivoli Workload Scheduler for z/OS auto-recovery; here you need to make some workarounds. The use of Tivoli Workload Scheduler CPU class definitions also needs special consideration: job definitions using CPU classes probably have to be copied to separate workstation-specific job definitions in Tivoli Workload Scheduler for z/OS.

  The task can be automated by coding a program (script or REXX) that reads the jobdef.txt file and converts each job definition to a member in the SCRPTLIB. If you have many Tivoli Workload Scheduler job definitions, a program that helps automate this task can save a considerable amount of time.

- The users.txt file (if you have Windows NT/2000 jobs) is converted to USRREC initialization statements in Tivoli Workload Scheduler for z/OS. Be aware that the password for the user IDs is encrypted in the users.txt file, so you cannot automate the conversion right away. You must get the password as it is defined on the Windows workstations and type it in the USRREC USRPSW() definition.

- The jobstream.txt file is used to generate the corresponding job streams in Tivoli Workload Scheduler for z/OS. The calendars.txt file is used in connection with the jobstream.txt file when generating run cycles for the job streams in Tivoli Workload Scheduler for z/OS. It may be necessary to create additional calendars in Tivoli Workload Scheduler for z/OS. When doing the conversion, you should note that:
  – Some of the Tivoli Workload Scheduler job stream definitions cannot be converted directly to Tivoli Workload Scheduler for z/OS job stream definitions (for example: prompts, workstation-specific resources, file dependencies, and jobs with repeat range). For these definitions you must analyze the usage and find other ways to implement similar functions when using Tivoli Workload Scheduler for z/OS.
  – Some of the Tivoli Workload Scheduler job stream definitions must be amended to Tivoli Workload Scheduler for z/OS definitions. For example:
    • Dependencies at the job stream level (use dummy start and end jobs in Tivoli Workload Scheduler for z/OS for job stream dependencies). Note that such dependencies also include dependencies on prompts, file dependencies, and resources.
    • Tivoli Workload Scheduler job and job stream priority (0 to 101) must be amended to Tivoli Workload Scheduler for z/OS priority (1 to 9). Furthermore, priority in Tivoli Workload Scheduler for z/OS is always at the job stream level. (It is not possible to specify priority at the job level.)
    • Job stream run cycles (and calendars) must be converted to Tivoli Workload Scheduler for z/OS run cycles (and calendars).
  – Description texts longer than 24 characters are not allowed for job streams or jobs in Tivoli Workload Scheduler for z/OS. If you have Tivoli Workload Scheduler job streams or jobs with more than 24 characters of description text, you should consider adding this text as Tivoli Workload Scheduler for z/OS operator instructions.

  If you have a large number of Tivoli Workload Scheduler job streams, manual handling of the job streams can be too time-consuming. The task can be automated to a certain extent by coding a program (script or REXX). A good starting point is to code a program that identifies all areas where you need special consideration or action. Use the output from this program to estimate the effort of doing the conversion. Further, the output can be used to identify and group used Tivoli Workload Scheduler functions for which special workarounds must be performed when converting to Tivoli Workload Scheduler for z/OS.
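As an illustration only, the following minimal sketch shows the kind of scanning program described above, written in Python. It assumes that the composer text export uses keywords such as PROMPT, OPENS (file dependencies), NEEDS (resources), and EVERY (repeat range); verify these keywords against your own jobstream.txt export before relying on the counts.

   #!/usr/bin/env python
   # Minimal sketch: count constructs in jobstream.txt that need special
   # handling in Tivoli Workload Scheduler for z/OS.
   # The keyword list is an assumption; adjust it to your composer export.
   KEYWORDS = {
       "PROMPT":   "prompt dependencies",
       "OPENS":    "file dependencies",
       "NEEDS":    "resource dependencies",
       "EVERY":    "repeat range",
       "PRIORITY": "job/job stream priority",
   }

   counts = dict((k, 0) for k in KEYWORDS)

   with open("jobstream.txt") as f:
       for line in f:
           fields = line.split()
           if not fields:
               continue
           word = fields[0].upper()
           if word in counts:
               counts[word] += 1

   for keyword, description in KEYWORDS.items():
       print("%-10s %-25s %d occurrence(s)" % (keyword, description, counts[keyword]))

The output gives a rough inventory of how often each construct is used, which can feed directly into the effort estimate and the grouping of workarounds mentioned above.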
  The program can be further refined to handle the actual conversion, performing the following steps:
  – Read all of the text files.
  – Analyze the job stream and job definitions.
  – Create corresponding Tivoli Workload Scheduler for z/OS job streams with amended run cycles and jobs.
  – Generate a file with Tivoli Workload Scheduler for z/OS batch loader statements for the job streams and jobs. (Batch loader statements are Tivoli Workload Scheduler for z/OS job stream definitions in a format that can be loaded directly into the Tivoli Workload Scheduler for z/OS databases.)

  The batch loader file can be sent to the z/OS system and used as input to the Tivoli Workload Scheduler for z/OS batch loader program. The Tivoli Workload Scheduler for z/OS batch loader will read the file (data set) and create the job streams and jobs defined in the batch loader statements.

- The resources.txt file is used to define the corresponding resources in Tivoli Workload Scheduler for z/OS. Remember that local (workstation-specific) resources are not allowed in Tivoli Workload Scheduler for z/OS. This means that the Tivoli Workload Scheduler workstation-specific resources will be converted to global special resources in Tivoli Workload Scheduler for z/OS.

  The Tivoli Workload Scheduler for z/OS engine is directly involved when resolving a dependency on a global resource: a fault-tolerant workstation job must interact with the Tivoli Workload Scheduler for z/OS engine to resolve a resource dependency. This can jeopardize the fault tolerance in your network.

- The use of parameters in the parameter.txt file must be analyzed. What are the parameters used for?
  – Are the parameters used for date calculations?
  – Are the parameters used to pass information from one job to another job (using the Tivoli Workload Scheduler parms command)?
  – Are the parameters used as parts of job definitions, for example, to specify where the script is placed?

  Depending on how you use the Tivoli Workload Scheduler parameters, there will be different approaches when converting to Tivoli Workload Scheduler for z/OS. Unless you use parameters as part of Tivoli Workload Scheduler object definitions, you usually do not have to do any conversion: parameters will still work after the conversion. You have to copy the parameter database to the home directory of the Tivoli Workload Scheduler fault-tolerant workstations. The parms command can still be used locally on the fault-tolerant workstation when it is managed by Tivoli Workload Scheduler for z/OS.

  We will show how to use Tivoli Workload Scheduler parameters in connection with Tivoli Workload Scheduler for z/OS JCL variables. This is a way to pass values from Tivoli Workload Scheduler for z/OS JCL variables to Tivoli Workload Scheduler parameters so that they can be used locally on the fault-tolerant workstation.

5.5 Tivoli Workload Scheduler for z/OS end-to-end fail-over scenarios

In this section, we describe how to make the Tivoli Workload Scheduler for z/OS end-to-end environment fail-safe and how to plan for system outages. We also show some fail-over scenario examples.

To make your Tivoli Workload Scheduler for z/OS end-to-end environment fail-safe, you have to:

- Configure Tivoli Workload Scheduler for z/OS backup engines (also called hot standby engines) in your sysplex. If you do not run a sysplex, but have more than one z/OS system with shared DASD, then you should make sure that the Tivoli Workload Scheduler for z/OS engine can be moved from one system to another without any problems.
- Configure your z/OS systems to use a virtual IP address (VIPA). VIPA is used to make sure that the Tivoli Workload Scheduler for z/OS end-to-end server always gets the same IP address no matter which z/OS system it runs on. VIPA assigns a system-independent IP address to the Tivoli Workload Scheduler for z/OS server task. If using VIPA is not an option, you should consider other ways of assigning a system-independent IP address to the Tivoli Workload Scheduler for z/OS server task; for example, a hostname file, DNS, or stack affinity.
- Configure a backup domain manager for the first-level domain manager.

Refer to the Tivoli Workload Scheduler for z/OS end-to-end configuration, shown in Figure 5-1 on page 266, for the fail-over scenarios. When the environment is configured to be fail-safe, the next step is to test that the environment actually is fail-safe. We did the following fail-over tests:

- Switch to the Tivoli Workload Scheduler for z/OS backup engine.
- Switch to the Tivoli Workload Scheduler backup domain manager.

5.5.1 Configure Tivoli Workload Scheduler for z/OS backup engines

To ensure that the Tivoli Workload Scheduler for z/OS engine will be started, either as the active engine or as a standby engine, we specify:

OPCOPTS OPCHOST(PLEX)

in the initialization statements for the Tivoli Workload Scheduler for z/OS engine (pointed to by the member of the EQQPARM library as specified by the PARM parameter on the JCL EXEC statement). OPCHOST(PLEX) means that the engine has to start as the controlling system. If there already is an active engine in the XCF group, the startup continues as a standby engine.

Note: OPCOPTS OPCHOST(YES) must be specified if you start the engine with an empty checkpoint data set. This could be the case the first time you start a newly installed engine or after you have migrated from a previous release of Tivoli Workload Scheduler for z/OS.

OPCHOST(PLEX) is valid only when an XCF group and member have been specified. Also, this selection requires that Tivoli Workload Scheduler for z/OS is running on MVS/ESA Version 4 Release 1 or later. Because we are running z/OS 1.3, we can use the OPCHOST(PLEX) definition.

We specify the XCF group and member definitions for the engine as shown in Example 5-8.

Example 5-8 XCF group and member definitions
XCFOPTS GROUP(TWS820)
        MEMBER(TWSC&SYSNAME.)
/*      TAKEOVER(SYSFAIL,HOSTFAIL)   Do takeover manually !!          */

Tip: We use the z/OS sysplex-wide SYSNAME variable when specifying the member name for the engine in the sysplex. Using z/OS variables this way, we can have common Tivoli Workload Scheduler for z/OS parameter member definitions for all our engines (and agents as well). For example, when the engine is started on SC63, MEMBER(TWSC&SYSNAME.) resolves to MEMBER(TWSCSC63). You must have unique member names for all your engines (active and standby) running in the same sysplex; we ensure this by using the SYSNAME variable.

Tip: We have not activated the TAKEOVER(SYSFAIL,HOSTFAIL) parameter in XCFOPTS because we do not want the engine to switch automatically to one of its backup engines if the active engine fails or the system fails. Because we have not specified the TAKEOVER parameter, we make the switch to one of the backup engines manually. The switch is made by issuing the following modify command on the z/OS system where you want the backup engine to take over:

F TWSC,TAKEOVER

In this example, TWSC is the name of our Tivoli Workload Scheduler for z/OS backup engine started task (the same name is used on all systems in the sysplex). The takeover can be managed by SA/390, for example. This way SA/390 can integrate the switch to a backup engine with other automation tasks in the engine or on the system.

We did not define a Tivoli Workload Scheduler for z/OS APPC server task for the Tivoli Workload Scheduler for z/OS panels and PIF programs, as described in "Remote panels and program interface applications" on page 31, but it is strongly recommended that you use a Tivoli Workload Scheduler for z/OS APPC server task in sysplex environments where the engine can be moved to different systems in the sysplex. If you do not use the Tivoli Workload Scheduler for z/OS APPC server task, you must log off and then log on to the system where the engine is active. This can be avoided by using the Tivoli Workload Scheduler for z/OS APPC server task.

5.5.2 Configure DVIPA for Tivoli Workload Scheduler for z/OS end-to-end server

To make sure that the engine can be moved from SC64 to either SC63 or SC65, Dynamic VIPA is used to define the IP address for the server task. This DVIPA IP address is defined in the profile data set pointed to by the PROFILE DD-card in the TCPIP started task.

The VIPA definition that is used to define logical sysplex-wide IP addresses for the Tivoli Workload Scheduler for z/OS end-to-end server, engine, and JSC server is shown in Example 5-9.

Example 5-9 The VIPA definition
VIPADYNAMIC
  viparange define 255.255.255.248 9.12.6.104
ENDVIPADYNAMIC
PORT
  424   TCP TWSC    BIND 9.12.6.105
  5000  TCP TWSCJSC BIND 9.12.6.106
  31282 TCP TWSCE2E BIND 9.12.6.107

In this example, the first column under PORT is the port number, the third column is the name of the started task, and the fifth column is the logical sysplex-wide IP address. Port 424 is used for the Tivoli Workload Scheduler for z/OS tracker agent IP address, port 5000 for the Tivoli Workload Scheduler for z/OS JSC server task, and port 31282 for the Tivoli Workload Scheduler for z/OS end-to-end server task. With these VIPA definitions, we have made a relation between port number, started task name, and the logical IP address that can be used sysplex-wide.

The TWSCE2E host name and 31282 port number that are used for the Tivoli Workload Scheduler for z/OS end-to-end server are defined in the TOPOLOGY HOSTNAME(TWSCE2E) initialization statement used by the TWSCE2E server and the Tivoli Workload Scheduler for z/OS plan programs (see the sketch at the end of this section). When the Tivoli Workload Scheduler for z/OS engine creates the Symphony file, the TWSCE2E host name and 31282 port number will be part of the Symphony file. The first-level domain manager (U100) and the backup domain manager (F101) will use this host name when they establish outbound IP connections to the Tivoli Workload Scheduler for z/OS server. The backup domain manager only establishes outbound IP connections to the Tivoli Workload Scheduler for z/OS server if it is going to take over the responsibilities of the first-level domain manager.
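As an illustration, the relevant part of the TOPOLOGY statement for this setup might look like the following sketch. Only the HOSTNAME and port values come from the text above; the PORTNUMBER, TPLGYMEM, and USERMEM keywords and the member names are assumptions, so verify them against the TOPOLOGY statement reference for your installation.

   TOPOLOGY HOSTNAME(TWSCE2E)    /* Sysplex-wide DVIPA host name          */
            PORTNUMBER(31282)    /* Port bound by the TWSCE2E server      */
            TPLGYMEM(TPDOMAIN)   /* Member with DOMREC/CPUREC (assumed)   */
            USERMEM(TPUSER)      /* Member with USRREC (assumed)          */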
5.5.3 Configure backup domain manager for first-level domain manager

Note: The examples and text below refer to a different end-to-end scheduling network, so the names of workstations are different than in the rest of the redbook. This section is included here mostly unchanged from End-to-End Scheduling with Tivoli Workload Scheduler 8.1, SG24-6022, because the steps to switch to a backup domain manager are the same in Version 8.2 as they were in Version 8.1. One additional option that is available with Tivoli Workload Scheduler 8.2 is to use the WSSTAT command instead of the Job Scheduling Console to do the switch (from backup domain manager to first-level domain manager). This method is also shown in this scenario, in addition to the GUI method.

In this section, we show how to configure a backup domain manager for a first-level domain manager. In this scenario, we have the F100 FTA configured as the first-level domain manager and the F101 FTA configured as the backup domain manager. The initial DOMREC definitions in Example 5-10 show that F100 is defined as the first-level domain manager.

Example 5-10 DOMREC definitions
/**********************************************************************/
/* DOMREC: Defines the domains in the distributed Tivoli Workload     */
/*         Scheduler network                                          */
/**********************************************************************/
/*--------------------------------------------------------------------*/
/* Specify one DOMREC for each domain in the distributed network,     */
/* with the exception of the master domain (whose name is MASTERDM    */
/* and consists of the TWS for z/OS engine).                          */
/*--------------------------------------------------------------------*/
DOMREC DOMAIN(DM100)          /* Domain name for 1st domain           */
       DOMMNGR(F100)          /* Chatham FTA - domain manager         */
       DOMPARENT(MASTERDM)    /* Domain parent is MASTERDM            */
DOMREC DOMAIN(DM200)          /* Domain name for 2nd domain           */
       DOMMNGR(F200)          /* Yarmouth FTA - domain manager        */
       DOMPARENT(DM100)       /* Domain parent is DM100               */

The F101 fault-tolerant agent can be configured to be the backup domain manager simply by specifying the entries shown in Example 5-11 in its CPUREC definition.

Example 5-11 Configuring F101 to be the backup domain manager
CPUREC CPUNAME(F101)
       CPUTCPIP(31758)
       CPUUSER(tws)
       CPUDOMAIN(DM100)
       CPUSERVER(1)
       CPUFULLSTAT(ON)        /* Full status on for Backup DM         */
       CPURESDEP(ON)          /* Resolve dep. on for Backup DM        */

With CPUFULLSTAT (full status information) and CPURESDEP (resolve dependency information) set to On, the Symphony file on F101 is updated with the same reporting and logging information as the Symphony file on F100. The backup domain manager will then be able to take over the responsibilities of the first-level domain manager.

Note: FixPack 04 introduces a new Fault-Tolerant Switch Feature, which is described in a PDF file named FaultTolerantSwitch.README. The new Fault-Tolerant Switch Feature replaces and enhances the existing (traditional) Fault-Tolerant Switch Manager for backup domain managers.

5.5.4 Switch to Tivoli Workload Scheduler backup domain manager

This scenario is divided into two parts:

- A short-term switch to the backup manager. By a short-term switch, we mean that we switch back to the original domain manager before the current plan is extended or replanned.
- A long-term switch. By a long-term switch, we mean that the switch to the backup manager will be effective across the current plan extension or replan.

Short-term switch to the backup manager

In this scenario, we issue a switchmgr command on the F101 backup domain manager and verify that F101 takes over the responsibilities of the old first-level domain manager. The steps in the short-term switch scenario are:

1. Issue the switch command on the F101 backup domain manager.
2. Verify that the switch is done.

Step 1. Issue switch command on F101 backup domain manager

Before we do the switch, we check the status of the workstations from a JSC instance pointing to the first-level domain manager (Figure 5-8 on page 308).

Figure 5-8 Status for workstations before the switch to F101

Note in Figure 5-8 that F100 is MANAGER (in the CPU Type column) for the DM100 domain, and F101 is FTA (in the CPU Type column) in the DM100 domain.

To simulate that the F100 first-level domain manager is down or unavailable due to a system failure, we issue the switch manager command on the F101 backup domain manager. The switch manager command is initiated from the conman command line on F101:

conman switchmgr "DM100;F101"

In this example, DM100 is the domain and F101 is the fault-tolerant workstation we are going to switch to. The F101 fault-tolerant workstation responds with the messages shown in Example 5-12.

Example 5-12 Messages showing switch has been initiated
TWS for UNIX (AIX)/CONMAN 8.1 (1.36.1.3)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for group 'TWS-EndToEnd'.
TWS for UNIX (AIX)/CONMAN 8.1 (1.36.1.3)
Licensed Materials Property of IBM
5698-WKB
(C) Copyright IBM Corp 1998,2001
US Government User Restricted Rights
Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Installed for group 'TWS-EndToEnd'.
Locale LANG set to "en_US"
Schedule (Exp) 02/27/02 (#107) on F101.  Batchman LIVES.  Limit: 20, Fence: 0, Audit Level: 0
switchmgr DM100;F101
AWS20710041I Service 2005 started on F101
AWS22020120 Switchmgr command executed from cpu F101 on cpu F101.

This indicates that the switch has been initiated.

It is also possible to initiate the switch from a JSC instance pointing to the F101 backup domain manager. Because we do not have a JSC instance pointing to the backup domain manager, we use the conman switchmgr command locally on the F101 backup domain manager.

For your information, we show how to initiate the switch from the JSC:

1. Double-click Status of all Domains in the Default Plan Lists in the domain manager JSC instance (TWSC-F100-Eastham) (Figure 5-9).

   Figure 5-9 Status of all Domains list

2. Right-click the DM100 domain to display the context menu shown in Figure 5-10.

   Figure 5-10 Context menu for the DM100 domain

3. Select Switch Manager. The JSC shows a new pop-up window in which we can search for the agent we will switch to (Figure 5-11).

   Figure 5-11 The Switch Manager - Domain search pop-up window

4. Click the search button (the square box with three dots to the right of the F100 domain shown in Figure 5-11), and the JSC opens the Find Workstation Instance pop-up window (Figure 5-12 on page 311).

   Figure 5-12 JSC Find Workstation Instance window

5. Click Start (Figure 5-12). The JSC opens a new pop-up window that contains all the fault-tolerant workstations in the network (Figure 5-13 on page 312).

6. If we specify a filter in the Find field (shown in Figure 5-12), this filter will be used to narrow the list of workstations that are shown.

   Figure 5-13 The result from Find Workstation Instance

7. Mark the workstation to switch to (F101 in our example) and click OK in the Find Workstation Instance window (Figure 5-13).

8. Click OK in the Switch Manager - Domain pop-up window to initiate the switch. Note that the selected workstation (F101) appears in the pop-up window (Figure 5-14).

   Figure 5-14 Switch Manager - Domain pop-up window with selected FTA

The switch to F101 is initiated and Tivoli Workload Scheduler performs the switch.

Note: With Tivoli Workload Scheduler for z/OS 8.2, you can now switch the domain manager using the WSSTAT TSO command on the mainframe. The Tivoli Workload Scheduler for z/OS Managing the Workload, SC32-1263 guide incorrectly states the syntax of this command. DOC APAR PQ93442 has been opened to correct the documentation.

If you prefer to work with the JSC, the above method of switching will appeal to you. If you are a mainframe operator, you may prefer to perform this sort of task from the mainframe. The example below shows how to do the switch using the WSSTAT TSO command instead. In Example 5-13, the workstation F101 is instructed to become the new domain manager of the DM100 domain. The command is sent via the TWST tracker subsystem.

Example 5-13 Alternate method of switching domain manager, using WSSTAT command
WSSTAT SUBSYS(TWST) WSNAME(F101) MANAGES(DM100)

If you prefer to work with the UNIX or Windows command line, Example 5-14 shows how to run the switchmgr command from conman.

Example 5-14 Alternate method of switching domain manager, using switchmgr
conman 'switchmgr DM100;F101'
Step 2. Verify that the switch is done

We check the status of the workstations using the JSC pointing to the old first-level domain manager, F100 (Figure 5-15).

Figure 5-15 Status for the workstations after the switch to F101

In Figure 5-15, it can be verified that F101 is now MANAGER (see the CPU Type column) for the DM100 domain (the Domain column). F100 is changed to an FTA (the CPU Type column). The OPCMASTER workstation has the status unlinked (as shown in the Link Status column in Figure 5-15 on page 314). This status is correct, as we are using the JSC instance pointing to the F100 workstation. OPCMASTER has a linked status on F101, as expected.

Switching to the backup domain manager takes some time, so be patient. The reason for this is that the switch manager command stops the backup domain manager and restarts it as the domain manager. All domain member fault-tolerant workstations are informed about the switch, and the old domain manager is converted to a fault-tolerant agent in the domain. The fault-tolerant workstations use the switch information to update their Symphony file with the name of the new domain manager. Then they stop and restart to link to the new domain manager.

On rare occasions, the link status is not shown correctly in the JSC after a switch to the backup domain manager. If this happens, try to link the workstation manually by right-clicking the workstation and clicking Link in the pop-up window.

Note: To reactivate F100 as the domain manager, simply do a switch manager back to F100 or run a Symphony redistribution. F100 will also be reinstated as the domain manager when you run the extend or replan programs.

Long-term switch to the backup manager

The identification of domain managers is placed in the Symphony file. If a switch domain manager command is issued, the old domain manager name will be replaced with the new (backup) domain manager name in the Symphony file.

If the switch to the backup domain manager is going to be effective across a Tivoli Workload Scheduler for z/OS plan extension or replan, we have to update the DOMREC definition. This is also the case if we redistribute the Symphony file from Tivoli Workload Scheduler for z/OS. The plan program reads the DOMREC definitions and creates a Symphony file with domain managers and fault-tolerant agents accordingly. If the DOMREC definitions are not updated to reflect the switch to the backup domain manager, the old domain manager will automatically resume domain management responsibilities.

The steps in the long-term switch scenario are:

1. Issue the switch command on the F101 backup domain manager.
2. Verify that the switch is done.
3. Update the DOMREC definitions used by the TWSCE2E server and the Tivoli Workload Scheduler for z/OS plan programs.
4. Run the replan plan program in Tivoli Workload Scheduler for z/OS.
5. Verify that the switched F101 is still the domain manager.

Step 1. Issue switch command on F101 backup domain manager

The switch command is done as described in "Step 1. Issue switch command on F101 backup domain manager" on page 308.

Step 2. Verify that the switch is done

We check the status of the workstations using the JSC pointing to the old first-level domain manager, F100 (Figure 5-16).

Figure 5-16 Status of the workstations after the switch to F101

From Figure 5-16 it can be verified that F101 is now MANAGER (see the CPU Type column) for the DM100 domain (see the Domain column). F100 is changed to an FTA (see the CPU Type column).

The OPCMASTER workstation has the status unlinked (see the Link Status column in Figure 5-16). This status is correct, as we are using the JSC instance pointing to the F100 workstation. OPCMASTER has a linked status on F101, as expected.

Step 3. Update the DOMREC definitions for server and plan program

We update the DOMREC definitions so that F101 will be the new first-level domain manager (Example 5-15).

Example 5-15 DOMREC definitions
/**********************************************************************/
/* DOMREC: Defines the domains in the distributed Tivoli Workload     */
/*         Scheduler network                                          */
/**********************************************************************/
/*--------------------------------------------------------------------*/
/* Specify one DOMREC for each domain in the distributed network,     */
/* with the exception of the master domain (whose name is MASTERDM    */
/* and consists of the TWS for z/OS engine).                          */
/*--------------------------------------------------------------------*/
DOMREC DOMAIN(DM100)          /* Domain name for 1st domain           */
       DOMMNGR(F101)          /* Chatham FTA - domain manager         */
       DOMPARENT(MASTERDM)    /* Domain parent is MASTERDM            */
DOMREC DOMAIN(DM200)          /* Domain name for 2nd domain           */
       DOMMNGR(F200)          /* Yarmouth FTA - domain manager        */
       DOMPARENT(DM100)       /* Domain parent is DM100               */

The DOMREC DOMMNGR(F101) keyword defines the name of the first-level domain manager. This is the only change needed in the DOMREC definition.

We created an extra member in the EQQPARM data set and called it TPSWITCH. This member has the updated DOMREC definitions to be used when we have a long-term switch. In the EQQPARM data set, we have three members: TPSWITCH (F101 is domain manager), TPNORM (F100 is domain manager), and TPDOMAIN (the member used by TWSCE2E and the plan programs). Before the plan programs are executed, we replace the TPDOMAIN member with the TPSWITCH member. When F100 is going to be the domain manager again, we simply replace the TPDOMAIN member with the TPNORM member.

Tip: If you let your system automation (for example, System Automation/390) handle the switch to the backup domain manager, you can automate the entire process: system automation replaces the EQQPARM members, initiates the switch manager command remotely on the fault-tolerant workstation, and resets the definitions when the original domain manager is ready to be activated.

Step 4. Run replan plan program in Tivoli Workload Scheduler for z/OS

We submit a replan plan program (job) using option 3.1 from the legacy ISPF panels in the Tivoli Workload Scheduler for z/OS engine and verify the output. Example 5-16 shows the messages in EQQMLOG.

Example 5-16 EQQMLOG
EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPDOMAIN IS: 0000
EQQ3005I CPU F101 IS SET AS DOMAIN MANAGER OF FIRST LEVEL
EQQ3030I DOMAIN MANAGER F101 MUST HAVE SERVER ATTRIBUTE SET TO BLANK
EQQ3011I CPU F200 SET AS DOMAIN MANAGER
EQQZ013I NOW PROCESSING PARAMETER LIBRARY MEMBER TPUSER
EQQZ014I MAXIMUM RETURN CODE FOR PARAMETER MEMBER TPUSER IS: 0000

The F101 fault-tolerant workstation is now the first-level domain manager. The EQQ3030I message is due to the CPUSERVER(1) specification in the CPUREC definition for the F101 workstation. The CPUSERVER(1) specification is used when F101 is running as a fault-tolerant workstation managed by the F100 domain manager.

Step 5. Verify that the switched F101 is still domain manager

Finally, we verify that F101 is still the domain manager after the replan program has finished and the Symphony file has been distributed (Figure 5-17).

Figure 5-17 Workstations status after Tivoli Workload Scheduler for z/OS replan program

From Figure 5-17, it can be verified that F101 is still MANAGER (in the CPU Type column) for the DM100 domain (in the Domain column). The CPU type for F100 is FTA. The OPCMASTER workstation has the status unlinked (the Link Status column). This status is correct, as we are using the JSC instance pointing to the F100 workstation. OPCMASTER has a linked status on F101, as expected.

Note: To reactivate F100 as the domain manager, simply do a switch manager back to F100 or a Symphony redistribution. F100 will also be reinstated as the domain manager when you run the extend or replan programs. Remember to change the DOMREC definitions before the plan programs are executed or before the Symphony file is redistributed.

5.5.5 Implementing Tivoli Workload Scheduler high availability on high availability environments

You can also use high availability environments such as High Availability Cluster Multi-Processing (HACMP) or Microsoft Cluster Server (MSCS) to implement fail-safe Tivoli Workload Scheduler workstations. The redbook High Availability Scenarios with IBM Tivoli Workload Scheduler and IBM Tivoli Framework, SG24-6632, discusses these scenarios in detail, so we refer you to it for implementing Tivoli Workload Scheduler high availability using HACMP or MSCS.

5.6 Backup and maintenance guidelines for FTAs

In this section, we discuss some important backup and maintenance guidelines for Tivoli Workload Scheduler fault-tolerant agents (workstations) in an end-to-end scheduling environment.
  • 335. 5.6.1 Backup of the Tivoli Workload Scheduler FTAs To make sure that you can recover from disk or system failures on the system where the Tivoli Workload Scheduler engine is installed, you should make a daily or weekly backup of the installed engine. The backup can be done in several ways. You probably already have some backup policies and routines implemented for the system where the Tivoli Workload Scheduler engine is installed. These backups should be extended to make a backup of files in the <TWShome> and the <TWShome/..> directories. We suggest that you have a backup of all of the Tivoli Workload Scheduler files in the <TWShome> and <TWShome/..> directories. If the Tivoli Workload Scheduler engine is running as a fault-tolerant workstation in an end-to-end network, it should be sufficient to make the backup on a weekly basis. When deciding how often a backup should be generated, consider: Are you using parameters on the Tivoli Workload Scheduler agent? If you are using parameters locally on the Tivoli Workload Scheduler agent and do not have a central repository for the parameters, you should consider making daily backups. Are you using specific security definitions on the Tivoli Workload Scheduler agent? If you are using specific security file definitions locally on the Tivoli Workload Scheduler agent and do not have a central repository for the security file definitions, you should consider making daily backups. Another approach is to make a backup of the Tivoli Workload Scheduler agent files, at least before making any changes to the files. For example, the changes can be updates to configuration parameters or a patch update of the Tivoli Workload Scheduler agent. 5.6.2 Stdlist files on Tivoli Workload Scheduler FTAs Tivoli Workload Scheduler fault-tolerant agents save job logs on the system where the jobs run. These job logs are stored in a directory named <twshome>/stdlist. In the stdlist (standard list) directory, there will be subdirectories with the name ccyy.mm.dd (where cc is the century, yy is the year, mm is the month, and dd is the date). This subdirectory is created daily by the Tivoli Workload Scheduler netman process when a new Symphony file (Sinfonia) is received on the fault-tolerant agent. The Symphony file is generated by the Tivoli Workload Scheduler for z/OS controller plan program in the end-to-end scheduling environment. Chapter 5. End-to-end implementation scenarios and examples 319
• 336. The ccyy.mm.dd subdirectory contains a job log for each job that is executed on a particular production day, as seen in Example 5-17.

Example 5-17 Files in a stdlist/ccyy.mm.dd directory
O19502.0908 File with job log for job with process no. 19502 run at 09.08
O19538.1052 File with job log for job with process no. 19538 run at 10.52
O38380.1201 File with job log for job with process no. 38380 run at 12.01

These log files are created by the Tivoli Workload Scheduler job manager process (jobman) and remain there until deleted by the system administrator. Tivoli Workload Scheduler also logs messages from its own programs. These messages are stored in a subdirectory of the stdlist directory called logs.
The easiest way to manage the growth of these directories is to decide how long the log files are needed and schedule a job under Tivoli Workload Scheduler for z/OS control that removes any file older than the given number of days. The Tivoli Workload Scheduler rmstdlist command can remove or display files in the stdlist directory based on the age of the files:
rmstdlist [-v | -u]
rmstdlist [-p] [age]
In these commands, the arguments are:
-v Displays the command version and exits.
-u Displays the command usage information and exits.
-p Displays the names of qualifying standard list file directories; no directories or files are removed. If you do not specify -p, the qualifying standard list files are removed.
age The minimum age, in days, for standard list file directories to be displayed or removed. The default is 10 days.
We suggest that you run the rmstdlist command daily on all of your fault-tolerant agents. The command can be defined in a job in a job stream and scheduled by Tivoli Workload Scheduler for z/OS. You may need to save a backup copy of the stdlist files, for example, for internal review or because of company policies. If this is the case, a backup job can be scheduled to run just before the rmstdlist job. 320 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
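As an illustration of how such a cleanup job could be defined in an end-to-end environment, the SCRPTLIB member below runs rmstdlist with an age of 10 days under the tws-e user. The member name, the installation path, and the user ID are assumptions for this sketch; use the TWShome directory and user of your own fault-tolerant agent.

/* Cleanup of stdlist files older than 10 days on an FTA */
JOBREC
  JOBSCR('/tivoli/TWS/bin/rmstdlist 10')
  JOBUSR(tws-e)

The member is then referenced from a job in a job stream that Tivoli Workload Scheduler for z/OS schedules daily on the fault-tolerant workstation.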
• 337. 5.6.3 Auditing log files on Tivoli Workload Scheduler FTAs
The auditing function can be used to track changes to the Tivoli Workload Scheduler plan (the Symphony file) on FTAs. Plan auditing is enabled by the TOPOLOGY PLANAUDITLEVEL parameter, described below.
PLANAUDITLEVEL(0|1) Enables or disables plan auditing for distributed agents. Valid values are 0 to disable plan auditing and 1 to activate plan auditing. Auditing information is logged to a flat file in the TWShome/audit/plan directory. Each Tivoli Workload Scheduler workstation maintains its own log. Only actions are logged in the auditing file, not the success or failure of any action. If you change the value, you also need to restart the Tivoli Workload Scheduler for z/OS server and renew the Symphony file.
After plan auditing has been enabled, modifications to the Tivoli Workload Scheduler plan (the Symphony file) on an FTA are logged to the plan audit directory on that workstation:
<TWShome>/audit/plan/date (where date is in ccyymmdd format)
We suggest that you clean out the audit database and plan directories regularly, daily if necessary. The cleanup can be defined as a job in a job stream and scheduled by Tivoli Workload Scheduler for z/OS. You may need to save a backup copy of the audit files (for internal review or because of company policies, for example). If so, a backup job can be scheduled to run just before the cleanup job.

5.6.4 Monitoring file systems on Tivoli Workload Scheduler FTAs
It is easier to deal with file system problems before they happen. If your file system fills up, Tivoli Workload Scheduler will no longer function and your job processing will stop. To avoid problems, monitor the file systems containing your Tivoli Workload Scheduler home directory and /tmp. For example, if you have a 2 GB file system, you might want a warning at 80%, but if you have a smaller file system, you will need a warning at a lower percentage. We cannot give you an exact percentage at which to be warned; this depends on many variables that change from installation to installation (or company to company). Monitoring or testing the percentage of the file system that is used can be done by, for example, IBM Tivoli Monitoring and IBM Tivoli Enterprise Console® (TEC). Example 5-18 shows a shell script that tests the percentage of the Tivoli Workload Scheduler file system that is filled and reports back if it is over 80%. Chapter 5. End-to-end implementation scenarios and examples 321
• 338. Example 5-18 Monitoring script
#!/bin/sh
# Extract the use percentage of the file system that holds TWS (here /dev/lv01)
/usr/bin/df -P /dev/lv01 | grep TWS > tmp1$$
/usr/bin/awk '{print $5}' tmp1$$ > tmp2$$
/usr/bin/sed 's/%$//g' tmp2$$ > tmp3$$
x=`cat tmp3$$`
# The > operator must be escaped so that the shell does not treat it as redirection
i=`expr $x \> 80`
if [ "$i" -eq 1 ]; then
  echo "This file system is over 80% full. You need to remove schedule logs and audit logs from the subdirectories in the file system." > tmp4$$
else
  echo "This file system is less than 80% full." > tmp4$$
fi
cat tmp4$$
rm tmp1$$ tmp2$$ tmp3$$ tmp4$$

5.6.5 Central repositories for important Tivoli Workload Scheduler files
Tivoli Workload Scheduler has several files that are important for the use of Tivoli Workload Scheduler and for the daily Tivoli Workload Scheduler production workload, whether you are running a Tivoli Workload Scheduler master domain manager or a Tivoli Workload Scheduler for z/OS end-to-end server. Managing these files across several Tivoli Workload Scheduler workstations can be a cumbersome and very time-consuming task. Using central repositories for these files can save time and make your management more effective.

Script files
Scripts (or the JCL) are very important objects when doing job scheduling on the Tivoli Workload Scheduler fault-tolerant agents. It is the scripts that actually perform the work on the agent system, such as updating the payroll database or the customer inventory database. The job definition for distributed jobs in Tivoli Workload Scheduler or Tivoli Workload Scheduler for z/OS contains a pointer (the path or directory) to the script. The script itself is placed locally on the fault-tolerant agent. Because the fault-tolerant agents have a local copy of the plan (Symphony) and the script to run, they can continue running jobs on the system even if the connection to the Tivoli Workload Scheduler master or the Tivoli Workload Scheduler for z/OS controller is broken. This way we have fault tolerance on the workstations.
Managing scripts on several Tivoli Workload Scheduler fault-tolerant agents and making sure that you always have the correct versions on every fault-tolerant agent can be a time-consuming task. You also must ensure that the scripts are protected so that they cannot be updated by the wrong person. Poorly protected scripts can cause problems in your production environment if someone has 322 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
• 339. changed something without notifying the responsible planner or change manager.
We suggest placing all scripts that are used for production workload in one common script repository. The repository can be designed in different ways. One way could be to have a subdirectory for each fault-tolerant workstation (with the same name as the Tivoli Workload Scheduler workstation). All changes to scripts are made in this production repository. On a daily basis, for example just before the plan is extended, the master scripts in the central repository are distributed to the fault-tolerant agents. The daily distribution can be handled by a Tivoli Workload Scheduler scheduled job. This job can be defined as a predecessor to the plan extend job.
This approach can be made even more advanced by using a software distribution application to handle the distribution of the scripts. The software distribution application can help keep track of different versions of the same script. If you encounter a problem with a changed script in a production shift, you can simply ask the software distribution application to redistribute a previous version of the same script and then rerun the job.

Security files
The Tivoli Workload Scheduler security file, discussed in detail in 5.7, "Security on fault-tolerant agents" on page 323, is used to protect access to Tivoli Workload Scheduler database and plan objects. On every Tivoli Workload Scheduler engine (such as a domain manager or fault-tolerant agent), you can issue conman commands for the plan and composer commands for the database. Tivoli Workload Scheduler security files are used to ensure that the right people have the right access to objects in Tivoli Workload Scheduler. Security files can be created or modified on every local Tivoli Workload Scheduler workstation, and they can be different from workstation to workstation.
We suggest having a common security strategy for all Tivoli Workload Scheduler workstations in your IBM Tivoli Workload Scheduler network (and end-to-end network). This way, the security file can be placed centrally and changes are made only in the central security file. If the security file has been changed, it is simply distributed to all Tivoli Workload Scheduler workstations in your IBM Tivoli Workload Scheduler network.

5.7 Security on fault-tolerant agents
In this section, we offer an overview of how security is implemented on Tivoli Workload Scheduler fault-tolerant agents (including domain managers). For Chapter 5. End-to-end implementation scenarios and examples 323
• 340. more details, see the IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273.
Figure 5-18 shows the security model on Tivoli Workload Scheduler fault-tolerant agents. When a user attempts to display a list of defined jobs, submit a new job stream, add a new resource, or perform any other operation related to the Tivoli Workload Scheduler plan or databases, Tivoli Workload Scheduler performs a check to verify that the user is authorized to perform that action.

Figure 5-18 Sample security setup
(The figure shows five nested user roles and the corresponding USER stanzas in the security file: (1) TWS and root users, with full access to all areas (USER Root, CPU=@+LOGON=maestro,TWS,root,Root_mars-region); (2) Operations Group, which can manage the whole workload but cannot create job streams and has no root access (USER Operations, CPU=@+LOGON=op,Operator, with, for example, JOB CPU=@ ACCESS=DISPLAY,SUBMIT,KILL,CANCEL); (3) Applications Manager, which can document jobs and schedules for the entire group and manage some production (USER AppManager, CPU=@+LOGON=appmgr,AppMgrs, with, for example, JOB CPU=@ ACCESS=@); (4) Application User, which can document its own jobs and schedules (USER Application, CPU=@+LOGON=apps,Application); and (5) General User, which has display access only (USER User, CPU=@+LOGON=users,Users).)

Tivoli Workload Scheduler users have different roles within the organization. The Tivoli Workload Scheduler security model you implement should reflect these roles. You can think of the different groups of users as nested boxes, as in Figure 5-18 on page 324. The largest box represents the highest access, granted to only the Tivoli Workload Scheduler user and the root user. The smaller boxes represent more restricted roles, with correspondingly restricted access. Each 324 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 341. group that is represented by a box in the figure would have a corresponding stanza in the security file. Tivoli Workload Scheduler programs and commands read the security file to determine whether the user has the access that is required to perform an action. 5.7.1 The security file Each workstation in a Tivoli Workload Scheduler network has its own security file. These files can be maintained independently on each workstation, or you can keep a single centralized security file on the master and copy it periodically to the other workstations in the network. At installation time, a default security file is created that allows unrestricted access to only the Tivoli Workload Scheduler user (and, on UNIX workstations, the root user). If the security file is accidentally deleted, the root user can generate a new one. If you have one security file for a network of agents, you may wish to make a distinction between the root user on a fault-tolerant agent and the root user on the master domain manager. For example, you can restrict local users to performing operations that affect only the local workstation, while permitting the master root user to perform operations that affect any workstation in the network. A template file named TWShome/config/Security is provided with the software. During installation, a copy of the template is installed as TWShome/Security, and a compiled copy is installed as TWShome/../unison/Security. Security file stanzas The security file is divided into one or more stanzas. Each stanza limits access at three different levels: User attributes appear between the USER and BEGIN statements and determine whether a stanza applies to the user attempting to perform an action. Object attributes are listed, one object per line, between the BEGIN and END statements. Object attributes determine whether an object line in the stanza matches the object the user is attempting to access. Access rights appear to the right of each object listed, after the ACCESS statement. Access rights are the specific actions that the user is allowed to take on the object. Chapter 5. End-to-end implementation scenarios and examples 325
  • 342. Important: Because only a subset of conman commands is available on FTAs in an end-to-end environment, some of the ACCESS rights that would be applicable in an ordinary non-end-to-end IBM Tivoli Workload Scheduler network will not be applicable in an end-to-end network. The steps of a security check The steps of a security check reflect the three levels listed above: 1. Identify the user who is attempting to perform an action. 2. Determine the type of object being accessed. 3. Determine whether the requested access should be granted to that object. Step 1: Identify the user When a user attempts to perform any Tivoli Workload Scheduler action, the security file is searched from top to bottom to find a stanza whose user attributes match the user attempting to perform the action. If no match is found in the first stanza, the user attributes of the next stanza are searched. If a stanza is found whose user attributes match that user, that stanza is selected for the next part of the security check. If no stanza in the security file has user attributes that match the user, access is denied. Step 2: Determine the type of object being accessed After the user has been identified, the stanza that applies to that user is searched, top-down, for an object attribute that matches the type of object the user is trying to access. Only that particular stanza (between the BEGIN and END statements) is searched for a matching object attribute. If no matching object attribute is found, access is denied. Step 3: Determine whether access is granted to that object If an object attribute is located that corresponds to the object that the user is attempting to access, the access rights following the ACCESS statement on that line in the file are searched for the action that the user is attempting to perform. If this access right is found, then access is granted. If the access right is not found on this line, then the rest of the stanza is searched for other object attributes (other lines) of the same type, and this step is repeated for each of these. Figure 5-19 on page 327 illustrates the steps of the security check algorithm. 326 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
• 343. Figure 5-19 Example of a Tivoli Workload Scheduler security check
(The figure shows user johns, logged on to the master Sol, issuing the command conman 'release mars#weekly.cleanup' against the FTA Mars. The security file on Sol is searched in three steps, 1) find the user, 2) find the object, and 3) find the access right, against the following stanza:
USER JohnSmith
CPU=@+LOGON=johns
BEGIN
JOB      CPU=@ NAME=C@ ACCESS=DISPLAY,RELEASE,ADD,...
JOB      CPU=@ NAME=@  ACCESS=DISPLAY
SCHEDULE CPU=@ ACCESS=DISPLAY,CANCEL,ADD,...
RESOURCE CPU=@ ACCESS=DISPLAY,MODIFY,ADD,...
PROMPT   ACCESS=DISPLAY,ADD,REPLY,...
CALENDAR ACCESS=DISPLAY
CPU      ACCESS=DISPLAY
END)

5.7.2 Sample security file
Here are some things to note about the security file stanza in Example 5-19 on page 328:
mastersm is an arbitrarily chosen name for this group of users.
The example security stanza would match a user who logs on to the master (or to the Framework via JSC) where the user name (or TMF Administrator name) is maestro, root, or Root_london-region.
These users have full access to jobs, job streams, resources, prompts, files, calendars, and workstations.
The users have full access to all parameters except those whose names begin with r (parameter name=@ ~ name=r@ access=@). Chapter 5. End-to-end implementation scenarios and examples 327
• 344. For NT user definitions (userobj), the users have full access to objects on all workstations in the network.

Example 5-19 Sample security file
###########################################################
#Sample Security File
###########################################################
#(1)APPLIES TO MAESTRO OR ROOT USERS LOGGED IN ON THE
#MASTER DOMAIN MANAGER OR FRAMEWORK.
user mastersm cpu=$master,$framework +logon=maestro,root,Root_london-region
begin
#OBJECT ATTRIBUTES          ACCESS CAPABILITIES
#--------------------------------------------
job                         access=@
schedule                    access=@
resource                    access=@
prompt                      access=@
file                        access=@
calendar                    access=@
cpu                         access=@
parameter name=@ ~ name=r@  access=@
userobj cpu=@ + logon=@     access=@
end

Creating the security file
To create user definitions, edit the template file TWShome/Security. Do not modify the original template in TWShome/config/Security. Then use the makesec command to compile and install a new operational security file. After it is installed, you can make further modifications by creating an editable copy of the operational file with the dumpsec command.

The dumpsec command
The dumpsec command takes the security file, generates a text version of it, and sends that to stdout. The user must have display access to the security file.
Synopsis:
dumpsec -v | -u
dumpsec > security-file
Description:
If no arguments are specified, the operational security file (../unison/Security) is dumped. To create an editable copy of a security file, redirect the output of the command to another file, as shown in "Example of dumpsec and makesec" on page 330. 328 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
• 345. Arguments:
– -v displays command version information only.
– -u displays command usage information only.
– security-file specifies the name of the security file to dump.
Figure 5-20 The dumpsec command

The makesec command
The makesec command essentially does the opposite of what the dumpsec command does. The makesec command takes a text security file, checks its syntax, compiles it into a binary security file, and installs the new binary file as the active security file. Changes to the security file take effect when Tivoli Workload Scheduler is stopped and restarted. Affected programs are:
Conman
Composer
Tivoli Workload Scheduler connectors
Simply exit the programs. The next time they are run, the new security definitions will be recognized. Tivoli Workload Scheduler connectors must be stopped using the wmaeutil command before changes to the security file will take effect for users of JSC. The connectors will automatically restart as needed. The user must have modify access to the security file.

Note: On Windows NT, the connector processes must be stopped (using the wmaeutil command) before the makesec command will work correctly.

Synopsis:
makesec -v | -u
makesec [-verify] in-file
Chapter 5. End-to-end implementation scenarios and examples 329
  • 346. Description: The makesec command compiles the specified file and installs it as the operational security file (../unison/Security). If the -verify argument is specified, the file is checked for correct syntax, but it is not compiled and installed. Arguments: – -v displays command version information only. – -u displays command usage information only. – -verify checks the syntax of the user definitions in the in-file only. The file is not installed as the security file. (Syntax checking is performed automatically when the security file is installed.) – in-file specifies the name of a file or set of files containing user definitions. A file name expansion pattern is permitted. Example of dumpsec and makesec Example 5-20 creates an editable copy of the active security file in a file named Security.conf, modifies the user definitions with a text editor, then compiles Security.conf and replaces the active security file. Example 5-20 Using dumpsec and makesec dumpsec > Security.conf vi Security.conf (Here you would make any required modifications to the Security.conf file) makesec Security.conf Note: Add the Tivoli Administrator to the Tivoli Workload Scheduler security file after you have installed the Tivoli Management Framework and Tivoli Workload Scheduler connector. Configuring Tivoli Workload Scheduler security for the Tivoli Administrator In order to use the Job Scheduling Console on a master or on an FTA, the Tivoli Administrator user (or users) must be defined in the security file of that master or FTA. The $framework variable can be used as a user attribute in place of a specific workstation. This indicates a user logging in via the Job Scheduling Console. 330 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
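As a sketch of such a definition, the stanza below grants a Tivoli Administrator full access when connecting through the Job Scheduling Console. The stanza name (jscadmins) is arbitrary and the administrator name Root_london-region is taken from the earlier sample; replace it with the Tivoli Administrator names defined in your Tivoli Managed Region, and reinstall the file with makesec afterward.

#APPLIES TO TIVOLI ADMINISTRATORS LOGGED IN VIA THE JSC
user jscadmins cpu=$framework +logon=Root_london-region
begin
job       access=@
schedule  access=@
resource  access=@
prompt    access=@
file      access=@
calendar  access=@
cpu       access=@
parameter access=@
userobj cpu=@ + logon=@ access=@
end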
• 347. 5.8 End-to-end scheduling tips and tricks
In this section, we provide some tips, tricks, and troubleshooting suggestions for the end-to-end scheduling environment.

5.8.1 File dependencies in the end-to-end environment
Use the filewatch.sh program that is delivered with Tivoli Workload Scheduler. A description of its usage and parameters can be found at the top of the filewatch.sh program itself.
In an ordinary (non-end-to-end) IBM Tivoli Workload Scheduler network (one in which the MDM is a UNIX or Windows workstation), it is possible to create a file dependency on a job or job stream; this is not possible in an end-to-end network because the controlling system is Tivoli Workload Scheduler for z/OS. It is very common to use files as triggers or predecessors to job flows in a distributed environment.
Tivoli Workload Scheduler 8.2 includes TWSHOME/bin/filewatch.sh, a sample script that can be used to check for the existence of files. You can configure the script to check periodically for the file, just as with a real Tivoli Workload Scheduler file dependency. By defining a job that runs filewatch.sh, you can implement a file dependency. To learn more about filewatch and how to use it, read the detailed description in the comments at the top of the filewatch.sh script.
The options of the script are:
-kd (mandatory) The options to pass to the test command. See the man page for "test" for a list of allowed values.
-fl (mandatory) Path name of the file (or directory) to look for.
-dl (required unless -nd is used) The deadline period (in seconds); cannot be used together with -nd.
-nd (required unless -dl is used) Suppress the deadline; cannot be used together with -dl.
-int (mandatory) The search interval period (in seconds).
-rc (optional) The return code that the script will exit with if the deadline is reached without finding the file (ignored if -nd is used).
-tsk (optional) The path of the task launched if the file is found.
Here are two filewatch examples.
In this example, the script checks for file /tmp/filew01 every 15 seconds indefinitely:
JOBSCR('/tws/bin/filewatch.sh -kd f -fl /tmp/filew01 -int 15 -nd') Chapter 5. End-to-end implementation scenarios and examples 331
• 348. In this example, the script checks for file /tmp/filew02 every 15 seconds for 60 seconds. If the file is not there 60 seconds after the check has started, the script ends with return code 12:
JOBSCR('/tws/bin/filewatch.sh -kd f -fl /tmp/filew02 -int 15 -dl 60 -rc 12')
Figure 5-21 shows how the filewatch script might be used as a predecessor to the job that will process or work with the file being "watched." This way you can make sure that the file to be processed is there before running the job that will process the file. The job that processes a file can be dependent on a filewatch job that watches for the file.
Figure 5-21 How to use filewatch.sh to set up a file dependency

5.8.2 Handling offline or unlinked workstations
Tip: If the workstation does not link as it should, the cause can be that the writer process has not started correctly or that the run number for the Symphony file on the fault-tolerant workstation is not the same as the run number on the master.
If you select the unlinked workstation and right-click it, a pop-up menu opens as shown in Figure 5-22 on page 333. Click Link to try to link the workstation. 332 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 349. Figure 5-22 Context menu for workstation linking You can check the Symphony run number and the Symphony status in the legacy ISPF using option 6.6. Tip: If the workstation is Not Available/Offline, the cause might be that the mailman, batchman, and jobman processes are not started on the fault-tolerant workstation. You can right-click the workstation to open the context menu shown in Figure 5-22, then click Set Status. This opens a new window (Figure 5-23), in which you can try to activate the workstation by clicking Active. This action attempts to start the mailman, batchman, and jobman processes on the fault-tolerant workstation by issuing a conman start command on the agent. Figure 5-23 Pop-up window to set status of workstation Chapter 5. End-to-end implementation scenarios and examples 333
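When the JSC is not available, the same recovery actions can be attempted from the command line on the domain manager or on the agent itself. The commands below are a sketch only; the workstation name F100 is just the example used in this chapter, and the exact selection syntax may differ slightly in your conman version.

conman "sc @!@"     # show all workstations with their link status and run number
conman "link F100"  # try to link the F100 workstation again
conman "start F100" # start mailman, batchman, and jobman on the F100 workstation

If netman itself is not running on the agent, these commands cannot reach it; in that case the StartUp script must be run locally on the fault-tolerant workstation first.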
• 350. 5.8.3 Using dummy jobs
Because it is not possible to add dependencies at the job stream level in Tivoli Workload Scheduler for z/OS (as it is in the IBM Tivoli Workload Scheduler distributed product), dummy start and dummy end general jobs are a workaround for this Tivoli Workload Scheduler for z/OS limitation. When using dummy start and dummy end general jobs, you can always uniquely identify the start point and the end point for the jobs in the job stream.

5.8.4 Placing job scripts in the same directories on FTAs
The SCRPTLIB members can be reused in several job streams and on different fault-tolerant workstations of the same type (such as UNIX or Windows). For example, if a job (script) is scheduled on all of your UNIX systems, you can create one SCRPTLIB member for this job and define it in several job streams on the associated fault-tolerant workstations, though this requires that the script be placed in the same directory on all of your systems. This is another good reason to have all job scripts placed in the same directories across your systems.

5.8.5 Common errors for jobs on fault-tolerant workstations
This section discusses two of the most common errors for jobs on fault-tolerant workstations.

Handling errors in script definitions
When adding a job stream to the current plan in Tivoli Workload Scheduler for z/OS (using JSC or option 5.1 from legacy ISPF), you may see this error message:
EQQM071E A JOB definition referenced by this occurrence is wrong
This indicates that there is an error in the definition for one or more jobs in the job stream and that the job stream is not added to the current plan. If you look in the EQQMLOG for the Tivoli Workload Scheduler for z/OS engine, you will find messages similar to Example 5-21.

Example 5-21 EQQMLOG messages
EQQM992E WRONG JOB DEFINITION FOR THE FOLLOWING OCCURRENCE:
EQQZ068E JOBRC IS AN UNKNOWN COMMAND AND WILL NOT BE PROCESSED
EQQZ068I FURTHER STATEMENT PROCESSING IS STOPPED

In our example, the F100J011 member in EQQSCLIB looks like:
JOBRC JOBSCR('/tivoli/TWS/scripts/japjob1') JOBUSR(tws-e) 334 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
• 351. Note the typo: JOBRC should be JOBREC. The solution to this problem is simply to correct the error and try to add the job stream again. The job stream must be added to the Tivoli Workload Scheduler for z/OS plan again, because the job stream was not added the first time (due to the typo).

Note: You will get similar error messages in the EQQMLOG for the plan programs if the job stream is added during plan extension. The error messages that are issued by the plan program are:
EQQZ068E JOBRC IS AN UNKNOWN COMMAND AND WILL NOT BE PROCESSED
EQQZ068I FURTHER STATEMENT PROCESSING IS STOPPED
EQQ3077W BAD MEMBER F100J011 CONTENTS IN EQQSCLIB
Note that the plan extension program will end with return code 0.

If an FTA job is defined in Tivoli Workload Scheduler for z/OS but the corresponding JOBREC is missing, the job will be added to the Symphony file but it will be set to priority 0 and state FAIL. This combination of priority and state is not likely to occur normally, so if you see a job like this, you can assume that the problem is that the JOBREC was missing when the Symphony file was built.
Another common error is a misspelled name for the script or the user (in the JOBREC, JOBSCR, or JOBUSR definition) in the FTW job. Say we have the JOBREC definition in Example 5-22.

Example 5-22 Typo in JOBREC
/* Definition for F100J010 job to be executed on F100 machine */
/*                                                            */
JOBREC JOBSCR('/tivoli/TWS/scripts/jabjob1') JOBUSR(tws-e)

Here the typo is in the name of the script: it should be japjob1 instead of jabjob1. This typo will result in an error with the error code FAIL when the job is run. The error will not be caught by the plan programs or when you add the job stream to the plan in Tivoli Workload Scheduler for z/OS. It is easy to correct this error using the following steps:
1. Correct the typo in the member in the SCRPTLIB.
2. Add the same job stream again to the plan in Tivoli Workload Scheduler for z/OS.
This way of handling typos in the JOBREC definitions is actually the same as if you performed a rerun on a Tivoli Workload Scheduler master. The job stream must be re-added to the Tivoli Workload Scheduler for z/OS plan to have Chapter 5. End-to-end implementation scenarios and examples 335
• 352. Tivoli Workload Scheduler for z/OS send the new JOBREC definition to the fault-tolerant workstation agent. Remember, when doing an extend or replan of the Tivoli Workload Scheduler for z/OS plan, that the JOBREC definition is built into the Symphony file. By re-adding the job stream we ask Tivoli Workload Scheduler for z/OS to send the re-added job stream, including the new JOBREC definition, to the agent.

Handling the wrong password definition for Windows FTW
If you have defined the wrong password for a Windows user ID in the USRREC topology definition, or if the password has been changed on the Windows machine, the FTW job will end in error with the error code FAIL. To solve this problem, you have two options:
Change the wrong USRREC definition and redistribute the Symphony file (using option 3.5 from legacy ISPF). This approach can be disruptive if you are running a huge batch load on FTWs and are in the middle of a batch peak.
Log on to the first-level domain manager (the domain manager directly connected to the Tivoli Workload Scheduler for z/OS server; if there is more than one first-level domain manager, log on to the one that is in the hierarchy of the FTW), then alter the password either using conman or using a JSC instance pointing to the first-level domain manager. When you have changed the password, simply rerun the job that was in error. The USRREC definition should still be corrected so that it will take effect the next time the Symphony file is created.

5.8.6 Problems with port numbers
There are two different parameters named PORTNUMBER: one in the SERVOPTS, which is used for the JSC and OPC Connector, and one in the TOPOLOGY parameters, which is used by the E2E Server to communicate with the distributed FTAs. The two PORTNUMBER parameters must have different values.
The localopts file for the FTA has a parameter named nm port, which is the port on which netman listens. The nm port value must match the CPUREC CPUTCPIP value for each FTA. There is no requirement that CPUTCPIP match the TOPOLOGY PORTNUMBER. The value of the TOPOLOGY PORTNUMBER and the HOSTNAME value are embedded in the Symphony file, which enables the FTA to know how to communicate back to OPCMASTER.
The next sections illustrate different ways in which setting the values for PORTNUMBER and CPUTCPIP incorrectly can cause problems in the E2E environment. 336 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
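To summarize these relationships, the sketch below shows one consistent combination, using port values taken from the examples in this section; the numbers are examples only and must be adapted to your installation.

SERVOPTS PORTNUMBER(446)    /* JSC and OPC Connector                     */
TOPOLOGY PORTNUMBER(31113)  /* E2E Server port, embedded in Symphony     */
CPUREC   CPUNAME(F100)
         CPUTCPIP(31111)    /* must equal "nm port" in the FTA localopts */

# localopts on the F100 fault-tolerant agent
nm port =31111

Note again that CPUTCPIP must match the nm port value of that particular agent, while the TOPOLOGY PORTNUMBER is independent of both but must differ from the SERVOPTS PORTNUMBER.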
  • 353. CPUTCPIP not the same as NM PORT The value for CPUTCPIP in the CPUREC parameter for an FTA should always be set to the same port that the FTA has defined as nm port in localopts. We did some tests to see what errors occur if the wrong value is used for CPUTCPIP. In the first test, nm port for the domain manager (DM) HR82 was 31111 but CPUTCPIP was set to 31122, a value that was not used by any FTA on our network. The current plan (CP) was extended to distribute a Symphony file with the wrong CPUTCPIP in place. The DM failed to link and the messages in Example 5-23 were seen in the USS stdlist TWSMERGE log. Example 5-23 Excerpt from TWSMERGE log MAILMAN:+ AWSBCV082I Cpu HR82, Message: AWSDEB003I Writing socket: EDC8128I Connection refused. MAILMAN:+ AWSBCV035W WARNING: Linking to HR82 failed, will write to POBOX. Therefore, if the DM will not link and the messages shown above are seen in TWSMERGE, the nm port value should be checked and compared to the CPUTCPIP value. In this case, correcting the CPUTCPIP value and running a Symphony Renew job eliminated the problem. We did another test with the same DM, this time setting CPUTCPIP to 31113. Example 5-24 Setting CPUTCPIP to 31113 CPUREC CPUNAME(HR82) CPUTCPIP(31113) The TOPOLOGY PORTNUMBER was also set to 31113, its normal value: TOPOLOGY PORTNUMBER(31113) After cycling the E2E Server and running a CP EXTEND, the DM and all FTAs were LINKED and ACTIVE, which was not expected (Example 5-25). Example 5-25 Messages showing DM and all the FTAs are LINKED and ACTIVE EQQMWSLL -------- MODIFYING WORK STATIONS IN THE CURRENT PLAN Row 1 to 8 of 8 Enter the row command S to select a work station for modification, or I to browse system information for the destination. Row Work station L S T R Completed Active Remaining cmd name text oper dur. oper oper dur. ' HR82 PDM on HORRIBLE L A C A 4 0.00 0 13 0.05 ' OP82 MVS XAGENT on HORRIBLE L A C A 0 0.00 0 0 0.00 ' R3X1 SAP XAGENT on HORRIBLE L A C A 0 0.00 0 0 0 Chapter 5. End-to-end implementation scenarios and examples 337
  • 354. How could the DM be ACTIVE if the CPUTCPIP value was intentionally set to the wrong value? We found that there was an FTA on the network that was set up with nm port=31113. It was actually an MDM (master domain manager) for a Tivoli Workload Scheduler 8.1 distributed-only (not E2E) environment, so our Version 8.2 E2E environment connected to the Version 8.1 MDM as if it were HR82. This illustrates that extreme care must be taken to code the CPUTCPIP values correctly, especially if there are multiple Tivoli Workload Scheduler environments present (for example, a test system and a production system). The localopts nm ipvalidate parameter could be used to prevent the overwrite of the Symphony file due to incorrect parameters being set up. If the following is specified in localopts: nm ipvalidate=full The connection would not be allowed if IP validation fails. However, if SSL is active, the recommendation is to use the following localopts parameter: nm ipvalidate=none PORTNUMBER set to PORT reserved for another task We wanted to test the effect of setting the TOPOLOGY PORTNUMBER parameter to a port that is reserved for use by another task. The data set specified by the PROFILE DD statement in the TCPIP statement had the parameters in Example 5-26. Example 5-26 TOPOLOGY PORTNUMBER parameter PORT 3000 TCP CICSTCP ; CICS Socket After setting PORTNUMBER in TOPOLOGY to 3000 and running a CP EXTEND to create a new Symphony file, there were no obvious indications in the messages that there was a problem with the PORTNUMBER setting. However, the following messages appeared in the NETMAN log in USS stdlist/logs: NETMAN:Listening on 3000 timeout 10 started Sun Aug 1 21:01:57 2004 These messages then occurred repeatedly in the NETMAN log (Example 5-27). Example 5-27 Excerpt from the NETMAN log NETMAN:+ AWSEDW020E Error opening IPC: NETMAN:AWSDEB001I Getting a new socket: 7 338 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 355. If these messages are seen and the DM will not link, the following command can be issued to determine that the problem is a reserved TCP/IP port: TSO NETSTAT PORTLIST In Example 5-28, the output shows the values for the PORTNUMBER port (3000). Example 5-28 Excerpt from the NETMAN log EZZ2350I MVS TCP/IP NETSTAT CS V1R5 TCPIP Name: TCPIP EZZ2795I Port# Prot User Flags Range IP Address EZZ2796I ----- ---- ---- ----- ----- ---------- EZZ2797I 03000 TCP CICSTCP DA PORTNUMBER set to PORT already in use PORTNUMBER in TOPOLOGY was set to 424, which was already in use as the TCPIPPORT by the controller. Everything worked correctly, but when the E2E Server was shut down, the message in Example 5-29 occurred in the controller EQQMLOG every 10 minutes. Example 5-29 Excerpt from the controller EQQMLOG 08/01 18.48.49 EQQTT11E AN UNDEFINED TRACKER AT IP ADDRESS 9.48.204.143 ATTEMPTED TO CONNECT TO THE 08/01 18.48.49 EQQTT11I CONTROLLER. THE REQUEST IS NOT ACCEPTED EQQMA11E Cannot allocate connection EQQMA17E TCP/IP socket I/O error during Connect() call for "SocketImpl<Binding=/192.227.118.43,port=31111,localport=32799>", failed with error: 146=Connection refused When the E2E Server was up, it handled port 424. When the E2E Server was down, port 424 was handled by the controller task (which still had TCPIPPORT set to the default value of 424). Because there were some TCP/IP connected trackers defined on that system, message EQQTT11E was issued when the FTA IP addresses did not match the TCP/IP addresses in the ROUTOPTS parameter. TOPOLOGY PORTNUMBER set the same as SERVOPTS PORTNUMBER The PORTNUMBER in SERVOPTS is used for JSC and OPC Connector. If the TOPOLOGY PORTNUMBER is set to the same value as the SERVOPTS PORTNUMBER, E2E processing will still work, but errors will occur when starting the OPC Connector. We did a test with the parmlib member for the E2E Server containing the values shown in Example 5-30 on page 340. Chapter 5. End-to-end implementation scenarios and examples 339
  • 356. Example 5-30 TOPOLOGY and SERVOPTS PORTNUMBER are the same SERVOPTS SUBSYS(O82C) PROTOCOL(E2E,JSC) PORTNUMBER(446) TOPOLOGY PORTNUMBER(446) The OPC Connector got the error messages shown in Example 5-31 and the JSC would not function. Example 5-31 Error message for the OPC Connector GJS0005E Cannot load workstation list. Reason: EQQMA11E Cannot allocate connection EQQMA17E TCP/IP socket I/O error during Recv() call for "Socketlmpl<Binding= dns name/ip address,port=446,localport=4699>" failed with error" 10054=Connection reset by peer For the OPC connector and JSC to work again, it was necessary to change the TOPOLOGY PORTNUMBER to a different value (not equal to the SERVOPTS PORTNUMBER) and cycle the E2E Server task. Note that this problem could occur if the JSC and E2E PROTOCOL functions were implemented in separate tasks (one task E2E only, one task JSC only) if the two PORTNUMBER values were set to the same value. 5.8.7 Cannot switch to new Symphony file (EQQPT52E) messages The EQQPT52E message, with text as shown in Example 5-32, can be a difficult one for troubleshooting as there are several different possible causes. Example 5-32 EQQPT52E message EQQPT52E Cannot switch to the new symphony file: run numbers of Symphony (x) and CP (y) aren't matching The x and y in the example message would be replaced by the actual run number values. Sometimes the problem is resolved by running a Symphony Renew or CP REPLAN (or CP EXTEND) job. However, there are some other things to check if this does not correct the problem: The EQQPT52E message can be caused if new FTA workstations are added via the Tivoli Workload Scheduler for z/OS dialog, but the TOPOLOGY parms are not updated with the new CPUREC information. In this case, adding the TOPOLOGY information and running a CP batch job should resolve the problem. 340 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
• 357. EQQPT52E can also occur if there are problems with the ID used to run the CP batch job or the E2E Server task. One clue that a user ID problem is involved with the EQQPT52E message is if, after the CP batch job completes, there is still a file in the WRKDIR whose name is Sym plus the user ID that the CP batch job runs under. For example, if the CP EXTEND job runs under ID TWSRES9, the file in the WRKDIR would be named SymTWSRES9. If security had been set up correctly, the SymTWSRES9 file would have been renamed to Symnew before the CP batch job ended.
If the cause of the EQQPT52E still cannot be determined, add the DIAGNOSE statements in Example 5-33 to the parm file indicated.

Example 5-33 DIAGNOSE statements added
(1) CONTROLLER: DIAGNOSE NMMFLAGS('00003000')
(2) BATCH (CP EXTEND): DIAGNOSE PLANMGRFLAGS('00040000')
(3) SERVER : DIAGNOSE TPLGYFLAGS(X'181F0000')

Then collect this list of documentation for analysis:
Controller and server EQQMLOGs
Output of the CP EXTEND (EQQDNTOP) job
EQQTWSIN and EQQTWSOU files
USS stdlist/logs directory (or a tar backup of the entire WRKDIR)
Chapter 5. End-to-end implementation scenarios and examples 341
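A backup of the working directory can be taken from a USS shell with tar, ideally while the end-to-end server is stopped so that the files are not being updated during the copy. The directory /tws/wrkdir below is a placeholder for the WRKDIR of your end-to-end server, and the archive name is arbitrary:

cd /tws
tar -cvf /tmp/e2e_wrkdir_backup.tar wrkdir

The resulting archive can then be supplied together with the EQQMLOGs and the plan job output when the problem is reported.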
• 359. Appendix A. Connector reference
In this appendix, we describe the commands related to the IBM Tivoli Workload Scheduler and IBM Tivoli Workload Scheduler for z/OS connectors. We also describe some Tivoli Management Framework commands related to the connectors. © Copyright IBM Corp. 2004 343
• 360. Setting the Tivoli environment
To use the commands described in this appendix, you must first set the Tivoli environment. To do this, log in as root or administrator, then enter the command for your shell, as shown in Table A-1.

Table A-1 Setting the Tivoli environment
sh or ksh: . /etc/Tivoli/setup_env.sh
bash: . /etc/Tivoli/setup_env.sh (same command as for sh or ksh)
csh: source /etc/Tivoli/setup_env.csh
DOS (Windows): %SYSTEMROOT%\system32\drivers\etc\Tivoli\setup_env.cmd

Authorization roles required
To manage connector instances, you must be logged in as a Tivoli administrator with one or more of the roles listed in Table A-2.

Table A-2 Authorization roles required for working with connector instances
user: Use the instance, view instance settings.
admin, senior, or super: Use the instance, view instance settings, create and remove instances, change instance settings, start and stop instances.

Note: To control access to the scheduler, the TCP/IP server associates each Tivoli administrator with a Resource Access Control Facility (RACF) user. For this reason, a Tivoli administrator should be defined for every RACF user. For additional information, refer to Tivoli Workload Scheduler V8R1 for z/OS Customization and Tuning, SH19-4544.

Working with Tivoli Workload Scheduler for z/OS connector instances
This section describes how to use the wopcconn command to create and manage Tivoli Workload Scheduler for z/OS connector instances. 344 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 361. Much of the following information is excerpted from the IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257. The wopcconn command Use the wopcconn command to create, remove, and manage Tivoli Workload Scheduler for z/OS connector instances. This program is downloaded when you install the connector. Table A-3 describes how to use wopcconn in the command line to manage connector instances. Note: Before you can run wopcconn, you must set the Tivoli environment. See “Setting the Tivoli environment” on page 344. Table A-3 Managing Tivoli Workload Scheduler for z/OS connector instances If you want to... Use this syntax Create an instance wopcconn -create [-h node] -e instance_name -a address -p port Stop an instance wopcconn -stop -e instance_name | -o object_id Start an instance wopcconn -start -e instance_name | -o object_id Restart an instance wopcconn -restart -e instance_name | -o object_id Remove an instance wopcconn -remove -e instance_name | -o object_id View the settings of an wopcconn -view -e instance_name | -o object_id instance Change the settings of an wopcconn -set -e instance_name | -o object_id [-n instance new_name] [-a address] [-p port] [-t trace_level] [-l trace_length] node is the name or the object ID (OID) of the managed node on which you are creating the instance. The TMR server name is the default. instance_name is the name of the instance. object_id is the object ID of the instance. new_name is the new name for the instance. address is the IP address or host name of the z/OS system where the Tivoli Workload Scheduler for z/OS subsystem to which you want to connect is installed. port is the port number of the OPC TCP/IP server to which the connector must connect. Appendix A. Connector reference 345
• 362. trace_level is the trace detail level, from 0 to 5. trace_length is the maximum length of the trace file.

Example
We used a z/OS system with the host name twscjsc. On this machine, a TCP/IP server connects to port 5000. Yarmouth is the name of the TMR managed node where we installed the OPC connector. We called this new connector instance twsc. With the following command, our instance has been created:
wopcconn -create -h yarmouth -e twsc -a twscjsc -p 5000
You can also run the wopcconn command in interactive mode. To do this, perform the following steps:
1. At the command line, enter wopcconn with no arguments.
2. Select choice number 1 in the first menu.

Example A-1 Running wopcconn in interactive mode
Name : TWSC
Object id : 1234799117.5.38#OPC::Engine#
Managed node : yarmouth
Status : Active
OPC version : 2.3.0
2. Name : TWSC
3. IP Address or Hostname: TWSCJSC
4. IP portnumber : 5000
5. Data Compression : Yes
6. Trace Length : 524288
7. Trace Level : 0
0. Exit

Working with Tivoli Workload Scheduler connector instances
This section describes how to use the wtwsconn.sh command to create and manage Tivoli Workload Scheduler connector instances. 346 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 363. For more information, refer to IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257. The wtwsconn.sh command Use the wtwsconn.sh utility to create, remove, and manage connector instances. This program is downloaded when you install the connector. Note: Before you can run wtwsconn.sh, you must set the Tivoli environment. See “Setting the Tivoli environment” on page 344. Table 5-2 How to manage Tivoli Workload Scheduler for z/OS connector instances If you want to... Use this syntax Create an instance wtwsconn.sh -create [-h node]-n instance_name -t twsdir Stop an instance wtwsconn.sh -stop -n instance | -t twsdir Remove an instance wtwsconn.sh -remove -n instance_name View the settings of an wtwsconn.sh -view -n instance_name instance Change the Tivoli Workload wtwsconn.sh -set -n instance_name -t twsdir Scheduler home directory of an instance node specifies the node where the instance is created. If not specified, it defaults to the node from which the script is run. instance is the name of the new instance. This name identifies the engine node in the Job Scheduling tree of the Job Scheduling Console. The name must be unique within the Tivoli Managed Region. twsdir specifies the home directory of the Tivoli Workload Scheduler engine that is associated with the connector instance. Example We used a Tivoli Workload Scheduler for z/OS with the host name twscjsc. On this machine, a TCP/IP server connects to port 5000. Yarmouth is the name of the TMR managed node where we installed the Tivoli Workload Scheduler connector. We called this new connector instance Yarmouth-A. With the following command, our instance has been created: wtwsconn.sh -create -h yarmouth -n Yarmouth-A -t /tivoli/TWS/ Appendix A. Connector reference 347
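To confirm that the instance was created with the expected settings, it can be displayed afterwards with the -view option; the instance name below is the one created in the example above:

wtwsconn.sh -view -n Yarmouth-A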
  • 364. Useful Tivoli Framework commands These commands can be used to check your Framework environment. Refer to the Tivoli Framework 3.7.1 Reference Manual, SC31-8434, for more details. wlookup -ar ProductInfo lists the products that are installed on the Tivoli server. wlookup -ar PatchInfo lists the patches that are installed on the Tivoli server. wlookup -ar MaestroEngine lists the instances of this class type (same for the other classes). For example: barb 1318267480.2.19#Maestro::Engine# The number before the first period (.) is the region number and the second number is the managed node ID (1 is the Tivoli server). In a multi-Tivoli environment, you can determine where a particular instance is installed by looking at this number because all Tivoli regions have a unique ID. wuninst -list lists all products that can be un-installed. wuninst {ProductName}-list lists the managed nodes where a product is installed. wmaeutil Maestro -Version lists the versions of the installed engine, database, and plan. wmaeutil Maestro -dbinfo lists information about the database and the plan. wmaeutil Maestro -gethome lists the installation directory of the connector. 348 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
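For example, on a UNIX managed node you could set the Tivoli environment and then check what is registered in the region with a few of these commands. This is only a sketch of a quick health check, and the output will differ in your environment:

. /etc/Tivoli/setup_env.sh
wlookup -ar ProductInfo      # products installed on the Tivoli server
wlookup -ar MaestroEngine    # connector engine instances and their region/node IDs
wmaeutil Maestro -Version    # versions of the installed engine, database, and plan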
  • 365. Related publications The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook. IBM Redbooks For information on ordering these publications, see “How to get IBM Redbooks” on page 350. Note that some of the documents referenced here may be available in softcopy only. End-to-End Scheduling with OPC and TWS Mainframe and Distributed Environment, SG24-6013 End-to-End Scheduling with Tivoli Workload Scheduler 8.1, SG24-6022 High Availability Scenarios with IBM Tivoli Workload Scheduler and IBM Tivoli Framework, SG24-6632 IBM Tivoli Workload Scheduler Version 8.2: New Features and Best Practices, SG24-6628 Implementing TWS Extended Agent for Tivoli Storage Manager, GC24-6030 TCP/IP in a Sysplex, SG24-5235 Other publications These publications are also relevant as further information sources: IBM Tivoli Management Framework 4.1 User’s Guide, GC32-0805 IBM Tivoli Workload Scheduler Job Scheduling Console Release Notes, Feature level 1.3, SC32-1258 IBM Tivoli Workload Scheduler Job Scheduling Console User’s Guide, Feature Level 1.3, SC32-1257 IBM Tivoli Workload Scheduler Planning and Installation Guide, SC32-1273 IBM Tivoli Workload Scheduler Reference Guide, SC32-1274 IBM Tivoli Workload Scheduler Release Notes Version 8.2 (Maintenance Release April 2004) SC32-1277 IBM Tivoli Workload Scheduler for z/OS Customization and Tuning, SC32-1265 © Copyright IBM Corp. 2004. All rights reserved. 349
  • 366. IBM Tivoli Workload Scheduler for z/OS Installation, SC32-1264 IBM Tivoli Workload Scheduler for z/OS Managing the Workload, SC32-1263 IBM Tivoli Workload Scheduler for z/OS Messages and Codes, Version 8.2 (Maintenance Release April 2004), SC32-1267 IBM Tivoli Workload Scheduling Suite General Information Version 8.2, SC32-1256 OS/390 V2R10.0 System SSL Programming Guide and Reference, SC23-3978 Tivoli Workload Scheduler for z/OS Installation Guide, SH19-4543 z/OS V1R2 Communications Server: IP Configuration Guide, SC31-8775 Online resources These Web sites and URLs are also relevant as further information sources: IBM Tivoli Workload Scheduler publications in PDF format http://publib.boulder.ibm.com/tividd/td/WorkloadScheduler8.2.html Search for IBM fix packs http://www.ibm.com/support/us/all_download_drivers.html Adobe (Acrobat) Reader http://www.adobe.com/products/acrobat/readstep2.html How to get IBM Redbooks You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications, and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site: ibm.com/redbooks Help from IBM IBM Support and downloads ibm.com/support IBM Global Services ibm.com/services 350 End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2
  • 367. Abbreviations and acronyms ACF Advanced Communications PDS Partitioned data set Function PID Process ID API Application Programming PIF Program interface Interface PSP Preventive service planning ARM Automatic Restart Manager PTF Program temporary fix COBRA Common Object Request Broker Architecture RACF Resource Access Control Facility CP Control point RFC Remote Function Call DM Domain manager RODM Resource Object Data DVIPA Dynamic virtual IP address Manager EM Event Manager RTM Recovery and Terminating FTA Fault-tolerant agent Manager FTW Fault-tolerant workstation SCP Symphony Current Plan GID Group Identification Definition SMF System Management Facility GS General Service SMP System Modification Program GUI Graphical user interface SMP/E System Modification HFS Hierarchical File System Program/Extended IBM International Business STLIST Standard list Machines Corporation TMF Tivoli Management ISPF Interactive System Framework Productivity Facility TMR Tivoli Management Region ITSO International Technical TSO Time-sharing option Support Organization TWS IBM Tivoli Workload ITWS IBM Tivoli Workload Scheduler Scheduler TWSz IBM Tivoli Workload JCL Job control language Scheduler for z/OS JES Job Execution Subsystem USS UNIX System Services JSC Job Scheduling Console VIPA Virtual IP address JSS Job Scheduling Services VTAM Virtual Telecommunications MN Managed nodes Access Method NNM Normal Mode Manager WA Workstation Analyzer OMG Object Management Group WLM Workload Monitor OPC Operations, planning, and X-agent Extended agent control XCF Cross-system coupling facility © Copyright IBM Corp. 2004. All rights reserved. 351
Index

(Alphabetical index of terms with page references, pages 353-363 of the original publication.)
Back cover

End-to-End Scheduling with IBM Tivoli Workload Scheduler V 8.2

Plan and implement your end-to-end scheduling environment
Experiment with real-life scenarios
Learn best practices and troubleshooting

The beginning of the new century sees the data center with a mix of work, hardware, and operating systems previously undreamed of. Today's challenge is to manage disparate systems with minimal effort and maximum reliability. People experienced in scheduling traditional host-based batch work must now manage distributed systems, and those working in the distributed environment must take responsibility for work running on the corporate OS/390 system.

This IBM Redbook considers how best to provide end-to-end scheduling using IBM Tivoli Workload Scheduler Version 8.2, both distributed (previously known as Maestro) and mainframe (previously known as OPC) components.

In this book, we provide the information for installing the necessary Tivoli Workload Scheduler 8.2 software components and configuring them to communicate with each other. In addition to technical information, we consider various scenarios that may be encountered in the enterprise and suggest practical solutions. We describe how to manage work and dependencies across both environments using a single point of control.

We believe that this book will be a valuable reference for IT specialists who implement end-to-end scheduling with Tivoli Workload Scheduler 8.2.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks

SG24-6624-00        ISBN 073849139X