
Towards Business Process Management: 5 good reasons why you want to integrate your RPA bot with HCL Workload Automation

Think about how HWA and RPA together can help your business to:


​​REDUCE COSTS
INCREASE PRODUCTIVITY
GAIN REAL END-TO-END CONTROL
INCREASE SLA COMPLIANCE
INCREASE COMPETITIVENESS

Foreword: is Automation the ultimate answer?

IT automation is now pervading the market, especially in conjunction with virtualization. Hence, one may be tempted to consider these two initiatives as the magical solution for reducing IT costs.

They may indeed, but let's start by confirming and arguing a few simple statements.
So, should companies go towards automation and virtualization?

Definitely yes. Wisely.

Many current processes are the result of years of experience, acquired when tasks were performed by people, and are possibly the best expression of a human organization. They may not necessarily make sense, though, when IT automation comes into play.

One example:

Fulfilling a purchase order from your customers may require several people to perform tasks and review data for consistency and correctness.

It is evident that many such tasks are repetitive and “purely algorithmic”. Hence, besides automating the process, you can simplify it, since many steps may not be required at all once you get rid of the human-error factor.

​The following picture gives the idea in a simplified way.
Figure 1 - Processes can be optimized before automating
What is Automation (and RPA and Workload Automation) by the way?
Automation, by itself, is a generic term that can be given a variety of meanings (see for example this post on this community).

Without exploring the many possible interpretations of the term, we will focus here on two subdomains:

Robotic Process Automation (RPA), intended as implementing robots that do things that were done, or could be done, by humans. 

Workload Automation (WA, or WLA), intended as the discipline of scheduling, controlling, and coordinating (in a word, orchestrating) the execution of IT processes that were never intended to be performed by humans anyway. We will call them batch processes here (please forgive the vagueness of this term).

What’s the difference? A person can actually fill in a web form, typing on the keyboard, and this is a typical task that can be done by RPA. But nobody actually copies the content of one database into another; at most, people would launch the operation and wait for it to complete.

Why both RPA and Workload Automation?

By the simple explanation above, the golden question arises spontaneously:

 Can a good RPA tool also give me Workload Automation capabilities?
After all, one of the human actions it emulates could be launching batch processes…


The short, simple and sincere answer is “sorry, but no”.

The reason is that Workload Automation goes far beyond merely starting batch jobs and checking their completion.

There is so much science behind orchestrating IT workflows that it deserves enterprise-grade products like HCL Workload Automation (HWA), whose features allow comprehensive control of the digital business.

These features range from sophisticated run-cycle and calendar management, to predictive analysis, to automatic recovery actions, to critical path management and SLA reassurance, just to mention a few.

There is no RPA tool that can do that, especially on a large scale, and there is no need to have one, given the existence of HWA.

On the other hand, RPA tools have also developed a lot, including capabilities like text and voice recognition, document management, natural language interfaces, and so forth.

RPA tools can also be specialized in specific industries or business needs, like
  • Chatbots
  • Mail processing
  • Doc management
  • Application testing
  • Others

For the above reasons, RPA and Workload Automation have minimal or no overlap at all, and both can be leveraged in conjunction for automation purposes.

Integrating HWA, RPA and the cloud
A complete business transaction may encompass batch transactions together with interactions among humans, some of which can be replaced by RPA while others must remain assigned to a person.

Integrating HWA with an RPA tool provides end-to-end control and visibility, as well as bringing considerable benefits in terms of costs and productivity.

The following use cases are not exhaustive, but illustrate such benefits, specifying where virtualization can be involved effectively:

Invoking HWA from RPA

Fulfilling an online request that involves batch executions requires RPA to trigger HWA workflows (see the sketch after the list below).

​The following is an example that illustrates the idea:
Figure 2 - Schematic flow RPA calling HWA
​The above example can be extended to cover other use cases like:
  • Processing Purchase Orders received by email
  • Processing Reimbursement requests received by email or by fax (Healthcare industry)
  • Processing e-commerce requests
  • Others
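Concretely, the trigger from RPA to HWA can be a single authenticated REST call issued by the bot. The sketch below is illustrative only: the host, port, endpoint path, and payload fields are assumptions and not the documented HWA REST API; consult the product's REST API catalog for the actual contract.

#!/bin/sh
# Hypothetical RPA bot step: trigger an HWA job stream over REST.
# Host, port, path, and payload below are illustrative assumptions.
HWA_HOST="https://hwa-master.example.com:31116"
curl -k -u "$HWA_USER:$HWA_PASS" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"jobStreamName":"PROCESS_ORDER","workstation":"MASTER_DA"}' \
  "$HWA_HOST/twsd/plan/current/jobstream/action/submit"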

Invoking RPA from HWA
Many IT flows require human intervention when:
  • The process requires special actions that cannot (or must not) be automated, like signing a document or authorizing a specific flow
  • The process encounters exceptions that require human attention, like processing files that arrive in a non-standard format
The following example illustrates the idea:
Figure 3 – Scenario highlighting a HWA aspect

The above scenario also highlights an important aspect of HWA: the ability to commission and decommission virtual resources based on dynamic conditions. This matters also when integrating with RPA, since RPA itself can be instantiated and removed on a per-need basis, optimizing virtual resource costs.

Conclusions
Far from believing this was an exhaustive discussion, I am still confident that the examples and considerations presented allow us to list 5 reasons why you may want to integrate HWA with your RPA tool:
​​
REDUCE COSTS:
  • Minimize virtual resources and RPA usage, which results in lower infrastructure costs and RPA license utilization.
  • Trigger HWA job streams only after some AI screening in RPA, which helps optimize HWA license utilization.
INCREASE PRODUCTIVITY:
  • Avoid waiting time between transactional and batch operations.
  • Optimize parallel executions in a single flow that encompasses both RPA and HWA.
GAIN REAL END-TO-END CONTROL:
  • Automate as much of your complete flow as you want, while keeping the required points of control for human intervention or supervision.
INCREASE SLA COMPLIANCE:
  • HWA predictive analysis, SLA control, and reporting features can apply to RPA flows triggered by HWA.
INCREASE COMPETITIVENESS:
  • Decrease operational IT costs.
  • Improve productivity.
  • Guarantee end users and customers a seamless experience with your IT processes.

Want to learn more? Contact me or schedule a demo of HCL Workload Automation!

Cristiano Plini
HWA Sales Specialist 

Cristiano is part of the Sales Team of HCL Workload Automation. He has sound experience in R&D, consultancy, and sales of IT solutions, gained over 25+ years working with enterprise customers worldwide in Europe, North America, and Latin America.

Back to the future: how IBM Z Workload Scheduler integration in Zowe is bringing the workload automation into the modern age

The following blog post is a sneak peek of what you will be able to do with Zowe, starting at the end of April.
If you can't wait and want to know more, drop a line to our Product Manager at emanuela.zaccone@hcl.com
 

85% of 1,400 IT professionals agree there is a mainframe skills gap. Around 18% of mainframe staff plan to retire within five years. IBM predicts that approximately 37,200 new mainframe administration positions will emerge worldwide by 2020. Are you facing the challenge of bringing young talent on board who struggle to interact with the mainframe?
72% of customer-facing applications are completely or very dependent on mainframe processing. Do you have the feeling that the mainframe is a black box, hard to integrate into hybrid cloud applications?

Questions like these have been recurring in conversations about the future of the mainframe, and they have helped re-think it, proposing new ways to interact with the mainframe and exploring new solutions for the so-called Gen Z.

What is IBM Z Workload Scheduler?

IBM Z Workload Scheduler (IZWS) brings the power of Workload Automation to the mainframe, allowing users to centrally control all their automation processes. Using IZWS, customers can automate workloads in a hybrid cloud environment while leveraging a solid, reliable technology.

IZWS gives customers a single point of control and monitoring for complex workflows in hybrid cloud environments, while making the most of predictive capabilities, automatic resolution of interdependencies, and out-of-the-box integrations with SAP, PeopleSoft, DataStage, Hadoop, and more. IZWS is the ultimate meta-orchestrator for continuous automation, leveraging containerization for the distributed components and an intuitive user interface, while offering the lowest TCO on the market.
IZWS + Zowe: together to do more on the mainframe

IBM Z Workload Scheduler is modernizing the mainframe approach, providing its capabilities in Zowe to facilitate operations for mainframe users. Zowe consumers need to submit services and check their status, performing several actions on their business processes: submit a job, get a list of jobs, retrieve the content of a job's output file, and many more.

In this landscape, IZWS offers a variety of commands based on REST APIs in Zowe, to manage and orchestrate all the services required in a DevOps scenario:

Application developers - creation and execution of all the integration tests needed to validate their new applications.
Operations - deployment, release, and monitoring of these applications.

IZWS offers a new plug-in for Zowe CLI: it allows users to quickly and easily access IZWS services in an ecosystem where many common mainframe tools are available. It enables young developers and system programmers to easily get on board and work on the mainframe.

IZWS commands exposed in Zowe make it possible to automate all the services that application developers and operators need to run, test, deploy, and monitor business services in different environments.
ZOWE conformance badge

"The Zowe Conformance Program aims to give users the confidence that when they use a product, app, or distribution that leverages Zowe they can expect a high level of common functionality, interoperability and user experience."
IBM Z Workload Scheduler plug-in for ZOWE CLI got the ZOWE conformance badge.​

How IBM Z Workload Scheduler integrates with Zowe CLI

Our goal is to simplify and speed up the daily work of both application developers and operators by providing them with a complementary and practical interface that can be quickly adopted by all generations of mainframers.
To achieve this, we leveraged our Transparent Development program, creating a priority list of commands to be integrated in Zowe, among those most requested by IZWS customers.

The IZWS plug-in provides a new set of commands under the main group workload automation (wa), as follows:

execute | exec - Execute WAPL commands. Workload Automation Programming Language (WAPL) is a programming language that can cover multipurpose scenarios (including actions available uniquely through WAPL).
get - Retrieve the specifics of resources in the plan.
list | ls - Monitor jobs, job streams (applications), and resources in the plan; list job streams in the model (database).
submit | sub - Submit/add a job stream dynamically into the plan.
update - Update resources in the plan.

All wa commands provide additional information about the jobs, job streams, or resources when the response-format JSON parameter is specified.
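For example, a minimal invocation might look like the following. The command name comes from the list above; any filtering arguments are omitted because their exact syntax is not shown in this post, while --rfj (--response-format-json) is the standard Zowe CLI global option:

# Monitor jobs in the plan and return machine-readable JSON output.
zowe wa list jobinplan --rfj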

One extra feature that IZWS exposes as a plug-in is the possibility to configure it as the Zowe CLI default credential manager.

As part of our continuous integration process, we aim to keep expanding the IZWS plug-in with additional commands, exposing new features to Zowe users.

Use Case: crisis management resolution

On a daily basis, operators process tickets for issues in the PRODUCTION environment. To resolve a critical situation, some of these tickets include requests for the submission of recovery or rerun procedures. These applications may require input parameters for the JOB, or time-dependency conditions such as a specific start time. These processes can be submitted on hold, and their execution is triggered when external conditions are satisfied.

In this example we show how an operator can speed up the completion of his assigned tickets by exploiting wa commands in a simple script.

1. Submission of a recovery application
In this example the submit.sh script reads the jobstream specifics from a ticket properties file and invokes the following wa commands:

submit jobstreaminplan to add the jobstream to the plan. This command can leverage the possibility to submit the application without running it until an external condition is satisfied, and the capacity to provide JOB variables to be substituted with given values once the job runs.
query jobstreaminplan to monitor the jobstream that has been submitted.

The submission script automatically moves the ticket to the ready-to-close folder once the application has been submitted and verified, as in the sketch below.
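A minimal sketch of what submit.sh could look like, assuming a simple key=value ticket properties file; the --variables and --hold option names are illustrative assumptions, not documented plug-in syntax:

#!/bin/sh
# submit.sh - hypothetical sketch: read the jobstream specifics from a
# ticket properties file and submit the recovery application on hold.
TICKET_FILE="$1"

# Assumed ticket format: key=value lines, e.g. JOBSTREAM=RECOVERY_APP
JOBSTREAM=$(grep '^JOBSTREAM=' "$TICKET_FILE" | cut -d= -f2)
JOBVARS=$(grep '^VARIABLES=' "$TICKET_FILE" | cut -d= -f2)

# Add the jobstream to the plan without running it until the external
# condition is satisfied; option names here are illustrative.
zowe wa submit jobstreaminplan "$JOBSTREAM" --variables "$JOBVARS" --hold || exit 1

# Verify the submission, then move the ticket to the ready-to-close folder.
zowe wa query jobstreaminplan "$JOBSTREAM" || exit 1
mv "$TICKET_FILE" /home/operator/tickets/ready_to_close/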

2. Release of an application
In this example the release.sh script invokes the following wa commands to release a jobstream:

exec wapl to execute WAPL statements that perform the release of the application. This command can be run directly from the command line or by taking as input a local file, such as the ticket file in the example above (see the sketch after this list).
query jobstreaminplan to monitor the jobstream that has been released.
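Correspondingly, a minimal sketch of release.sh under the same assumptions (the --file option for passing the WAPL statements is illustrative):

#!/bin/sh
# release.sh - hypothetical sketch: release a held application via WAPL
# statements taken from the ticket file.
TICKET_FILE="$1"

# Execute the WAPL release statements; the file-input option is illustrative.
zowe wa exec wapl --file "$TICKET_FILE" || exit 1

# Monitor the jobstream that has just been released.
JOBSTREAM=$(grep '^JOBSTREAM=' "$TICKET_FILE" | cut -d= -f2)
zowe wa query jobstreaminplan "$JOBSTREAM"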

3. Monitoring the status of the jobs within a jobstream
In this example the following wa command is invoked from the command line to monitor the jobs:

list jobinplan to obtain the status and other information, such as the start time and workstation, of the jobs within an application.

Take Away

  • Enablement of real DevOps scenarios.
  • Automation and scheduling of all processes available on the mainframe system.
  • Easy access and use of IBM Workload Automation resources and processes on the mainframe.
  • Simple and quick integration with other mainframe products that expose plug-ins on ZOWE.



Watch the IZWS + ZOWE video

Federica Gari, IZWS & IWS Techsales

Federica started working for IBM in 2001 at the IBM Software Laboratory in Rome. She began her career in the L3 Customer Support department for NetView Access Services and Tivoli Decision Support. Over the years, she has held various roles in the product development lifecycle, including two years with the development team for IBM Tivoli Workload Scheduler. She then became the L3 Customer Support team leader for the same product. She has also covered the role of Transparent Development Program leader for the entire IBM Workload Automation family, and worked in the L2 Customer Support team at HCL. Her current role is Workload Automation Techsales in the HCL Service offering team. She holds a master's degree in Mathematics.
Eva Anahi Berrio, Software Engineer

With an important background as a mainframe application programmer (COBOL, CICS, and DB2), she joined the IZWS team in 2019 at the HCL Products and Platforms software development laboratory in Rome.
Davide Canalis, Software Engineer

Davide graduated in Mathematics and works as a Software Engineer at the HCL Products and Platforms software development laboratory in Rome. He has been a member of the z/OS IZWS development team since April 2017, becoming the team's REST API expert.

IBM Z Workload Scheduler: https://www.ibm.com/us-en/marketplace/tivoli-workload-scheduler-for-z-systems
 
IBM + HCL Transparent Development Program: http://www.workloadautomation-community.com/blogs/ibm-hcl-workload-automation-transparent-development-program
 
Workload Automation Youtube Channel:  https://www.youtube.com/channel/UC_R7b8BDt_qaRdzuxczBM1Q
 
Zowe Conformance Program: https://www.openmainframeproject.org/projects/zowe/conformance
 
Zowe Official Website:   https://www.zowe.org/
 
Zowe Github Repo:   https://github.com/zowe
 
Open Mainframe Project Official Website:  https://www.openmainframeproject.org/
 
Open Mainframe Youtube Channel:   https://www.youtube.com/channel/UC-WTXQQtz2m5iTflJLK59aw
 
Open Mainframe Project Slack Channel:   https://openmainframeproject.slack.com

No pain, only gain: smooth migration of your business to next generation automation

​Workload Automation (WA) simplifies and accelerates application delivery.
It automates and orchestrates application workflows across hybrid environments to satisfy SLAs and improve efficiency and productivity.
​The top 7 reasons for replacing your scheduling solution with Workload Automation

  1. Manages hybrid workflows from a single point of control.
  2. Provides a very easy infrastructure to build complex workflows, with more than 35 out-of-the-box integrations.
  3. It’s highly scalable. It handles millions of transactions in a day.
  4. Automates every step between writing code and deploying applications to customers.
  5. Runs end-to-end workflows across multiplatform environments and containers.
  6. Offers a comprehensive catalog of REST APIs, which allows flexible integration with a variety of applications.
  7. Enables DevOps collaboration with Zowe on the z/OS platform.
 
Replacing your automation solution with Workload Automation is easy and secure.
Take advantage of HCL Center of Excellence in migrations from competition to HWA.
The expertise and experience of HCL migration specialists speed up the transition and help realize ROI faster.
 
What customers think of HCL WA migration projects 
  • Accelerated migration with zero business impact and minimal downtime.
  • Assurance that the workflow behaves as expected and that the same conditions of the legacy solution are respected within Workload Automation.
  • Complete control of the migration project on the customer side: after each step, the flexibility to go back to the previous condition to find the best alternative option, and a detailed plan for the critical cutover period to the new Workload Automation solution.
  • Identification of new Workload Automation capabilities that increase performance, reliability, serviceability, and scalability.
 
Major competitor scheduling and automation solutions are covered by a tool that automates the migration, minimizing manual intervention.
Manual intervention is required only to assess specific customer requests about customizations made for the original scheduling solution.
The HCL WA migration team automates this process with a migration tool and accompanies you through the transition, every step of the way, with a proven 4-step methodology:
 
  1. Workshop: Definition of the migration project details and pre-requisites of the new architecture infrastructure. 
  2. WA deployment and education: Installation of the Workload Automation infrastructure and an initial education session to enable the customer to review and validate the migrated data.
  3. Data migration and validation, then Go-Live: The converted data is sent to the customer in subsets of scheduling definitions for validation. When all of the data has been migrated and approved by the customer, the legacy solution is decommissioned and replaced with Workload Automation.
  4. Post-education session: Additional sessions are provided to take a deep dive into specific product configurations and satisfy additional customer requirements.
Your Take Away
The HCL Center of Excellence in migrations offers a high level of support and response during the transition phase, allowing accelerated migration with zero business impact and minimal downtime.
The migration project is designed to help the customer reduce infrastructure and operational costs.
Are you ready to move to a solution recognized as a leader in Workload Automation? Learn about our solution here.
 
And if you want to start with a migration consultancy contact us at HWAInfo@hcl.com or find out more about our Services here. 

Federica Gari, WA Techsales

Federica started working for IBM in 2001 at the IBM Software Laboratory in Rome. She began her career in the L3 Customer Support department for NetView Access Services and Tivoli Decision Support. Over the years, she has held various roles in the product development lifecycle, including two years with the development team for IBM Tivoli Workload Scheduler. She then became the L3 Customer Support team leader for the same product. She has also covered the role of Transparent Development Program leader for the entire IBM Workload Automation family, and worked in the L2 Customer Support team at HCL. Her current role is Workload Automation Techsales in the HCL Service offering team. She holds a master's degree in Mathematics.
Riccardo Belloni, WA Service team

Senior Workload Automation Consultant at HCL Technologies. Riccardo is part of the HCL Workload Automation services team. He is a Workload Automation expert who provides technical support to customers to manage and create the desired solutions, and develops and delivers technical education. His versatility across various IT domains is built on 8+ years of IT work and consulting. He loves freediving and spearfishing; these passions led him to become a freediving instructor in 2014.
Giacomo del Vecchio, WA Service team 

Giacomo joined HCL in 2017 as a Software Engineer in the Workload Automation Level 3 Support team, where he supported customers and worked on the product's fix packs. He then moved to the Workload Automation Services team, where he helps customers set up Workload Automation solutions, integrations with other products, and architecture designs for new implementations and upgrades. He's a travel lover, really passionate about technology and programming, especially topics like cloud computing, Linux, and Python.
 

Case Study: Inventory Management with Workload Automation

This blog showcases a case study on how inventory management can be handled through Workload Automation. A company runs a production plant where inventory management is to be automated using Workload Scheduler. The production plant wants to be as close to a balanced plant as possible, reducing inventory when in excess and increasing inventory when low.
The plant plans to achieve this through a custom script which runs Material Requirements Planning (MRP) and lists all inventory needed on a weekly basis to process all orders in the manufacturing plant.

The plant has four main types of inventory:
  1. MS SHEETS (qty)
  2. MACHINE OIL (in ltrs)
  3. ANGLE BRACKETS (qty)
  4. POLISH (in ltrs)
The plant aims to manage the above 4 inventory items and decide whether to procure additional inventory or sell the excess inventory available.

MATERIAL REQUIREMENTS PLANNING:

Material requirements planning is executed through a custom script to find the inventory levels needed to process orders in the production plant for the week.

The needed inventory levels are indicated in terms of the 4 inventory items above, which the company wants to track and stay on top of, to decide whether to purchase additional inventory or sell excess inventory and reduce inventory carrying costs.

Material requirements planning is done through the job MATERIAL_REQ_PLANNING.
When executed, the script computes the inventory needed to process all orders this week and writes an output similar to the following to the log file:
 
[root@EU-HWS-LNX242 unixda]# cat production_mrp.log 
MS SHEETS: 14867 sheets 
Machine Oil: 14534 ltrs 
Angle Brackets: 31193 qty 
Polish: 16252 ltrs 
MS SHEETS: 5509 sheets 
Machine Oil: 21137 ltrs 
Angle Brackets: 25562 qty 
Polish: 12581 ltrs 
 
Entries generated each week are appended to this log file, so it shows all 4 inventory items and the levels needed this week (the most recent entries) for order processing.
 
CURRENT INVENTORY:

The job CURR_INVENTORY computes the current inventory level available in the plant this week. The current inventory levels are again maintained in a log file, so a script called from the job fetches the current inventory levels.
The output in the log file of this job would be similar to the following:
 
[root@EU-HWS-LNX242 unixda]# cat production_factory.log
MS SHEETS: 5674 sheets
Machine Oil: 8905 ltrs
Angle Brackets: 29775 qty
Polish: 3662 ltrs
MS SHEETS: 32355 sheets
Machine Oil: 6497 ltrs
Angle Brackets: 28658 qty
Polish: 1746 ltrs
 
Once the MRP job and the Current Inventory job have run, we have the current inventory levels and the MRP plan for the week in place. The next step is to use the features of Workload Scheduler to parse the job logs, fetch the inventory levels directly from the log files, and assign them to Workload Automation variables. You can then use the variable-passing features of Workload Scheduler to pass these variables to a comparison script.

The comparison script compares the current inventory levels with the MRP inventory levels and passes conditions depending on the outcome. Using the conditional dependencies of Workload Scheduler, the comparison job passes one of three output conditions: “EXCESS INVENTORY”, “LOW INVENTORY”, or “BALANCED INVENTORY”. If the output condition is “LOW INVENTORY”, a job procures raw material and generates a procurement order. Likewise, another job sells all excess inventory by raising a sales order; this job returns a sales order number, along with the inventory count it plans to sell.
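The glue between these steps is the jobprop utility paired with variable substitution. The first line below appears verbatim in the job logs later in this post; the ${job:JOB.PROPERTY} references in the second step, and the job and script names there, are assumptions made by analogy, for illustration only:

# Publish step (executable job PUBLISH_CURR_MSSHEETS): HWA resolves
# ${job:PARSE_CURR_LOG_MSSHEET.stdlist} to that job's output before the script runs.
jobprop CURR_MSSHEETS ${job:PARSE_CURR_LOG_MSSHEET.stdlist}

# Comparison step (a later job in the stream): hand the published properties
# to the comparison script; job and property names here are assumptions.
/home/unixda/compare_inventory.sh ${job:PUBLISH_CURR_MSSHEETS.CURR_MSSHEETS} ${job:PUBLISH_MRP_MSSHEETS.MRP_MSSHEETS}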

PARSE_CURR_INVENTORY Jobs:
We would have 4 jobs which parse the current inventory log and fetch the inventory level for each item: MS SHEETS, MACHINE OIL, POLISH, ANGLE BRACKETS.
Each job runs a tac command on the current inventory log file and greps for the relevant inventory item, fetching the latest level of that item and echoing it in the job log:
 
[root@EU-HWS-LNX242 unixda]# cat /home/unixda/PARSE_CURR_LOG_MACHINE_OIL.sh
#!/bin/sh
# Read the log bottom-up (tac) and take the most recent "Machine Oil" entry;
# the third space-separated field is the quantity.
PARSE_CURR_LOG_MACHINE_OIL=`tac /home/unixda/production_factory.log | grep -m1 "Machine Oil" | cut -d" " -f3`
echo $PARSE_CURR_LOG_MACHINE_OIL

Likewise, we would have 4 such jobs created for parsing the current inventory levels from the log and echoing them in the job log:
 
===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PARSE_CURR_LOG_MSSHEET
= USER      : unixda
= JCLFILE   : /home/unixda/PARSE_CURR_LOG_MSSHEET.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881481
= Wed 03/18/2020 08:40:13 CET
===============================================================
32355

===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 16
= Job Memory usage (kb) : 1532
= Wed 03/18/2020 08:40:13 CET
===============================================================
 

===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PARSE_CURR_LOG_MACHINE_OIL
= USER      : unixda
= JCLFILE   : /home/unixda/PARSE_CURR_LOG_MACHINE_OIL.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881484
= Wed 03/18/2020 08:40:13 CET
===============================================================
6497
 
===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 16
= Job Memory usage (kb) : 1532
= Wed 03/18/2020 08:40:13 CET
===============================================================
 
 
===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PARSE_CURR_LOG_POLISH
= USER      : unixda
= JCLFILE   : /home/unixda/PARSE_CURR_LOG_POLISH.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881485
= Wed 03/18/2020 08:40:13 CET
===============================================================
1746
 
===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 16
= Job Memory usage (kb) : 1532
= Wed 03/18/2020 08:40:13 CET
===============================================================
 
 
===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PARSE_CURR_LOG_ANGLE_BRACKETS
= USER      : unixda
= JCLFILE   : /home/unixda/PARSE_CURR_LOG_ANGLE_BRACKETS.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881483
= Wed 03/18/2020 08:40:13 CET
===============================================================
28658
 
===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 15
= Job Memory usage (kb) : 1532
= Wed 03/18/2020 08:40:13 CET
===============================================================
 


PARSE_MRP_LOG Jobs:
Likewise, we would have jobs which parse the MRP log and fetch the needed inventory levels for the week.

The implementation again runs a tac command and greps the MRP log for the latest inventory levels:
[root@EU-HWS-LNX242 unixda]# cat PARSE_MRP_LOG_POLISH.sh
#!/bin/sh
# Read the MRP log bottom-up and take the most recent "Polish" entry;
# the second space-separated field is the quantity.
PARSE_MRP_LOG_POLISH=`tac /home/unixda/production_mrp.log | grep -m1 "Polish" | cut -d" " -f2`
echo $PARSE_MRP_LOG_POLISH
We would have 4 such jobs for all the inventory types of importance to the company: Polish, MS Sheets, Machine Oil, Angle Brackets.
The job logs of the 4 jobs would be similar to those shown below, reflecting the MRP inventory levels needed for the week for each of the inventory items:
===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PARSE_MRP_LOG_MSSHEET
= USER      : unixda
= JCLFILE   : /home/unixda/PARSE_MRP_LOG_MSSHEET.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881482
= Wed 03/18/2020 08:40:13 CET
===============================================================
5509

===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 17
= Job Memory usage (kb) : 1532
= Wed 03/18/2020 08:40:13 CET
===============================================================
 
 
===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PARSE_MRP_LOG_ANGLE_BRACKETS
= USER      : unixda
= JCLFILE   : /home/unixda/PARSE_MRP_LOG_ANGLE_BRACKETS.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881486
= Wed 03/18/2020 08:40:13 CET
===============================================================
25562
 
===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 17
= Job Memory usage (kb) : 1532
= Wed 03/18/2020 08:40:13 CET
===============================================================
 
 
===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PARSE_MRP_LOG_MACHINE_OIL
= USER      : unixda
= JCLFILE   : /home/unixda/PARSE_MRP_LOG_MACHINE_OIL.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881487
= Wed 03/18/2020 08:40:13 CET
===============================================================
21137

===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 17
= Job Memory usage (kb) : 1528
= Wed 03/18/2020 08:40:13 CET
===============================================================
 
 
===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PARSE_MRP_LOG_POLISH
= USER      : unixda
= JCLFILE   : /home/unixda/PARSE_MRP_LOG_POLISH.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881488
= Wed 03/18/2020 08:40:13 CET
===============================================================
12581
 
===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 17
= Job Memory usage (kb) : 1532
= Wed 03/18/2020 08:40:13 CET
===============================================================
 

Current Inventory Publish Jobs:

The next set of jobs publishes the current inventory levels from the Parse Current Inventory jobs. For each of the inventory items, a corresponding job fetches the job log output from the Parse Current Inventory job and stores it in a Workload Scheduler variable, as follows. This job is of the executable job type and uses the jobprop utility of Workload Scheduler to publish the current inventory levels as Workload Scheduler variables.

The jobprop utility uses the format ${job:JOBNAME.stdlist} to fetch the job log of the Parse Current Inventory job and pass it to a Workload Scheduler variable.
 
===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PUBLISH_CURR_MSSHEETS
= TASK      : <?xml version="1.0" encoding="UTF-8"?>
<jsdl:jobDefinition xmlns:jsdl="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdl" xmlns:jsdle="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdle" name="executable">
  <jsdl:variables>
    <jsdl:stringVariable name="tws.jobstream.name">INVENTORY_PLNG</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.jobstream.id">0AAAAAAAAAAAAC4H</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.name">PUBLISH_CURR_MSSHEETS</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.workstation">MASTER_DA</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.iawstz">202003180839</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.promoted">NO</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.resourcesForPromoted">10</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.num">632881491</jsdl:stringVariable>
  </jsdl:variables>
  <jsdl:application name="executable">
    <jsdle:executable interactive="false">
            <jsdle:script>#!/bin/sh
set -xv
cd /opt/wauser
. ./twa_env.sh
jobprop CURR_MSSHEETS ${job:PARSE_CURR_LOG_MSSHEET.stdlist}
 
</jsdle:script>
        </jsdle:executable>
  </jsdl:application>

  <jsdl:resources>
    <jsdl:orderedCandidatedWorkstations>
      <jsdl:workstation>11C4F5D45E1011EAAB724BB1022D6B3F</jsdl:workstation>
    </jsdl:orderedCandidatedWorkstations>
  </jsdl:resources>
</jsdl:jobDefinition>
= TWSRCMAP  :
= AGENT     : MASTER_DA
= Job Number: 632881491
= Wed 03/18/2020 08:40:18 CET
===============================================================
cd /opt/wauser
+ cd /opt/wauser
. ./twa_env.sh
+ . ./twa_env.sh
#!/bin/sh

#############################################################################

# Licensed Materials - Property of IBM* and HCL**  
# 5698-WSH   
# (c) Copyright IBM Corp. 1998, 2016 All rights reserved.
# (c) Copyright HCL Technologies Ltd. 2016 All rights reserved.
# * Trademark of International Business Machines
# ** Trademark of HCL Technologies Limited
############################################################################# 
#
# This script for UNIX sets the environment for using HCL Workload
# Automation
#
# Launch it in your session, within the dir where it is located,
# with a preceding dot
#
#########################################################################

if [ -f /opt/wauser/TWS/../TDWB/bin/tdwb_env.sh ]

then
        CURRENT_PATH=`pwd`
    cd /opt/wauser/TWS/../TDWB/bin
    . ./tdwb_env.sh
    cd $CURRENT_PATH
elif [ -f /opt/wauser/TWS/TDWB_CLI/bin/tdwb_env.sh ]
then
        CURRENT_PATH=`pwd`
    cd /opt/wauser/TWS/TDWB_CLI/bin
    . ./tdwb_env.sh
    cd $CURRENT_PATH
else
    PATH=/opt/wauser/wastools:$PATH
    export PATH
fi
++ '[' -f /opt/wauser/TWS/../TDWB/bin/tdwb_env.sh ']'
pwd
+++ pwd
++ CURRENT_PATH=/opt/wauser
++ cd /opt/wauser/TWS/../TDWB/bin
++ . ./tdwb_env.sh
#!/bin/sh
####################################################################

# Licensed Materials - Property of IBM and HCL

# Restricted Materials of IBM and HCL
# 5698-WSH
# (C) Copyright IBM Corp. 2009-2016 All Rights Reserved.
# (C) Copyright HCL Technologies Ltd. 2016 All Rights Reserved.
# US Government Users Restricted Rights - Use, duplication or
# disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
###################################################################
 
if [ ! -f "/opt/wauser/../.noecho" ]
then
    echo "Setting CLI environment variables...."
fi
+++ '[' '!' -f /opt/wauser/../.noecho ']'
+++ echo 'Setting CLI environment variables....'
Setting CLI environment variables....
 
#DB2_HOME=$(db2_home)
#export DB2_HOME
#ORACLE_HOME=$(oracle_home)
#export ORACLE_HOME
#TODO informix IDS_JARS=/opt/wauser/dbtools/ids/jars/db2jcc.jar:/opt/wauser/../TDWB/dbtools/ids/jars/db2jcc.jar
#export IDS_JARS

WLP_USER_DIR=/opt/wauser/usr

+++ WLP_USER_DIR=/opt/wauser/usr
TDWB_HOME=/opt/wauser/TDWB
+++ TDWB_HOME=/opt/wauser/TDWB
export TDWB_HOME
+++ export TDWB_HOME
DB_DRIVER_PATH=/home/db2inst2/sqllib/java
+++ DB_DRIVER_PATH=/home/db2inst2/sqllib/java
export DB_DRIVER_PATH
+++ export DB_DRIVER_PATH
JAVA_BIN=/opt/wauser/TWS/JavaExt/jre/jre/bin
+++ JAVA_BIN=/opt/wauser/TWS/JavaExt/jre/jre/bin
export JAVA_BIN
+++ export JAVA_BIN
DATA_DIR=/opt/wauser/TWSDATA
+++ DATA_DIR=/opt/wauser/TWSDATA
export DATA_DIR
+++ export DATA_DIR
#/opt/wa/server_wauser/usr/servers/engineServer/resources/lib/cars
CARS_DIR=$WLP_USER_DIR/servers/engineServer/resources/lib/cars
+++ CARS_DIR=/opt/wauser/usr/servers/engineServer/resources/lib/cars
export CARS_DIR
+++ export CARS_DIR
 
EMF_LIB=$WLP_USER_DIR/servers/engineServer/resources/lib/emf
+++ EMF_LIB=/opt/wauser/usr/servers/engineServer/resources/lib/emf
export EMF_LIB
+++ export EMF_LIB
 
CLASSPATH=$DATA_DIR/broker/config:$TDWB_HOME/lib/*:$DB_DRIVER_PATH/*:$CARS_DIR/events-client.jar:$CARS_DIR/IBMCARSEmitter.jar:$EMF_LIB/*

+++ CLASSPATH='/opt/wauser/TWSDATA/broker/config:/opt/wauser/TDWB/lib/*:/home/db2inst2/sqllib/java/*:/opt/wauser/usr/servers/engineServer/resources/lib/cars/events-client.jar:/opt/wauser/usr/servers/engineServer/resources/lib/cars/IBMCARSEmitter.jar:/opt/wauser/usr/servers/engineServer/resources/lib/emf/*'
PATH=$PATH:$TDWB_HOME/bin
+++ 
PATH=/opt/wauser/TWSDATA/ITA/cpa/cs:/opt/wauser/TWS/ITA/cpa/ita:/sbin:/usr/sbin:/usr/bin:/opt/wauser/TWS/ITA/cpa/ita:/sbin:/usr/sbin:/usr/bin:/opt/wauser/TWS:/opt/wauser/TWS/bin:/opt/wauser/TWS/xtrace:/opt/wauser/TWS/../appservertools:/opt/wauser/TWS/ITA/cpa/ita:/usr/Tivoli/TWS/GSKit64/8/lib64:/usr/Tivoli/TWS/GSKit32/8/lib/../bin:/usr/Tivoli/TWS/OpenSSL64/1.0.0/lib64/../bin:/opt/wauser/TWS/OpenSSL32/1.0.0/lib/../bin:/opt/wauser/TWS/CLI/bin:/home/testhwa/MDM_package/TWS/LINUX_X86_64/Tivoli_Eclipse_LINUX_X86_64/TWS/JavaExt/jre/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/bin:/usr/bin:/opt/wauser/TDWB/bin

export PATH
+++ export PATH

export CLASSPATH

+++ export CLASSPATH
++ cd /opt/wauser
 
if [ -f /opt/wauser/TWS/tws_env.sh ]
then
        CURRENT_PATH=`pwd`
    cd /opt/wauser/TWS
    . ./tws_env.sh
    cd $CURRENT_PATH
else
    PATH=/opt/wauser/wastools:$PATH
    export PATH
fi
++ '[' -f /opt/wauser/TWS/tws_env.sh ']'
pwd
+++ pwd
++ CURRENT_PATH=/opt/wauser
++ cd /opt/wauser/TWS
++ . ./tws_env.sh
#!/bin/sh

#############################################################################

# Licensed Materials - Property of IBM* and HCL**  
# 5698-WSH   
# (c) Copyright IBM Corp. 1998, 2016 All rights reserved.
# (c) Copyright HCL Technologies Ltd. 2016, 2017 All rights reserved.
# * Trademark of International Business Machines
# ** Trademark of HCL Technologies Limited
############################################################################# 
#
# This script for UNIX sets the environment for using HCL Workload
# Scheduler
#
# Launch it in your session, within the dir where it is located,
# with a preceding dot
#
#########################################################################

#Set GSKit path

 
GSKIT_VER=8
+++ GSKIT_VER=8
 
if [ -d /opt/wauser/TWS/GSKit64/$GSKIT_VER -o  -d /opt/wauser/TWS/GSKit32/$GSKIT_VER ]
then
GSKIT_PATH=/opt/wauser/TWS/GSKit64/$GSKIT_VER/lib64:/opt/wauser/TWS/GSKit32/$GSKIT_VER/lib
else
GSKIT_PATH=/usr/Tivoli/TWS/GSKit64/$GSKIT_VER/lib64:/usr/Tivoli/TWS/GSKit32/$GSKIT_VER/lib
fi
+++ '[' -d /opt/wauser/TWS/GSKit64/8 -o -d /opt/wauser/TWS/GSKit32/8 ']'
+++ GSKIT_PATH=/usr/Tivoli/TWS/GSKit64/8/lib64:/usr/Tivoli/TWS/GSKit32/8/lib
 
OPENSSL_VER=1.0.0
+++ OPENSSL_VER=1.0.0
if [ -d /usr/Tivoli/TWS/OpenSSL64/$OPENSSL_VER ]
then
        OPENSSL64_PATH=/usr/Tivoli/TWS/OpenSSL64/$OPENSSL_VER/lib64
else
        OPENSSL64_PATH=/opt/wauser/TWS/OpenSSL64/$OPENSSL_VER/lib64
fi
+++ '[' -d /usr/Tivoli/TWS/OpenSSL64/1.0.0 ']'
+++ OPENSSL64_PATH=/usr/Tivoli/TWS/OpenSSL64/1.0.0/lib64
if [ -d /usr/Tivoli/TWS/OpenSSL32/$OPENSSL_VER ]
then
        OPENSSL32_PATH=/usr/Tivoli/TWS/OpenSSL32/$OPENSSL_VER/lib
else
        OPENSSL32_PATH=/opt/wauser/TWS/OpenSSL32/$OPENSSL_VER/lib
fi
+++ '[' -d /usr/Tivoli/TWS/OpenSSL32/1.0.0 ']'
+++ OPENSSL32_PATH=/opt/wauser/TWS/OpenSSL32/1.0.0/lib
OPENSSL_LIBPATH=$OPENSSL64_PATH:$OPENSSL32_PATH
+++ OPENSSL_LIBPATH=/usr/Tivoli/TWS/OpenSSL64/1.0.0/lib64:/opt/wauser/TWS/OpenSSL32/1.0.0/lib
OPENSSL_PATH=$OPENSSL64_PATH/../bin:$OPENSSL32_PATH/../bin
+++ OPENSSL_PATH=/usr/Tivoli/TWS/OpenSSL64/1.0.0/lib64/../bin:/opt/wauser/TWS/OpenSSL32/1.0.0/lib/../bin
 
UNISONWORK=/opt/wauser/TWSDATA
+++ UNISONWORK=/opt/wauser/TWSDATA
export UNISONWORK
+++ export UNISONWORK
UNISONHOME=/opt/wauser/TWS
+++ UNISONHOME=/opt/wauser/TWS
export UNISONHOME
+++ export UNISONHOME
 
OSName=`uname`
uname
++++ uname
+++ OSName=Linux
case $OSName in
        AIX|OS400)
               TWS_TISDIR=$UNISONHOME;
               export TWS_TISDIR
               TISDIR=$UNISONHOME
               export TISDIR                        
               AIX_VERSION=`uname -v`
 
#Allow to use the GSKIt and OpenSSL command line
               if [ "$AIX_VERSION" = "5" -o "$AIX_VERSION" = "4" -o "$OSName" = "OS400" ]
               then
                       LIBPATH=$UNISONHOME/bin:$LIBPATH:.:$GSKIT_PATH:$OPENSSL_LIBPATH:$UNISONHOME/ITA/cpa/ita/lib:$UNISONHOME/CLI/bin
                       export LIBPATH
               else
                       LIBPATH=$UNISONHOME/bin:$LIBPATH:$GSKIT_PATH:$OPENSSL_LIBPATH:.:$UNISONHOME/ITA/cpa/ita/lib:$UNISONHOME/CLI/bin
                       export LIBPATH
               fi
 
                PATH=$UNISONHOME:$UNISONHOME/bin:$UNISONHOME/xtrace:$UNISONHOME/../appservertools:$UNISONHOME/ITA/cpa/ita:$OPENSSL_PATH:$UNISONHOME/CLI/bin:$PATH;
               export PATH
 
               JAVA_HOME=$UNISONHOME/JavaExt/jre/jre;
               export JAVA_HOME
               ;;
 
        Linux)
               export TISDIR=$UNISONHOME     
               export TWS_TISDIR=$UNISONHOME;
 
#Allow to use the GSKIt and OpenSSL command line
               uname -m | grep i | grep 86 > /dev/null
               if [ $? -eq 0 ]; then
                       GSKIT_LIBPATH_SUFFIX=lib
                       GSKIT_PATH=/usr/Tivoli/TWS/GSKit32/8/lib
                       OPENSSL_LIBPATH=$OPENSSL32_PATH
                       OPENSSL_PATH=$OPENSSL32_PATH/../bin
               else
                       GSKIT_LIBPATH_SUFFIX=lib64
               fi
               export LD_LIBRARY_PATH=$UNISONHOME/bin:$GSKIT_PATH:$OPENSSL_LIBPATH:$LD_LIBRARY_PATH:.:$UNISONHOME/ITA/cpa/ita/lib:$UNISONHOME/CLI/bin
               export PATH=$UNISONHOME:$UNISONHOME/bin:$UNISONHOME/xtrace:$UNISONHOME/../appservertools:$UNISONHOME/ITA/cpa/ita:$GSKIT_PATH/../bin:$OPENSSL_PATH:$UNISONHOME/CLI/bin:$PATH;
               export JAVA_HOME=$UNISONHOME/JavaExt/jre/jre;
 
               ;;
 
        SunOS)
               TWS_TISDIR=$UNISONHOME
               export TWS_TISDIR             
               TISDIR=$UNISONHOME
               export TISDIR         
           LD_LIBRARY_PATH=$UNISONHOME/bin:$GSKIT_PATH/:$OPENSSL_LIBPATH/:$LD_LIBRARY_PATH:.:$UNISONHOME/ITA/cpa/ita/lib:$UNISONHOME/CLI/bin
               export LD_LIBRARY_PATH
                PATH=$UNISONHOME:$UNISONHOME/bin:$UNISONHOME/xtrace:$UNISONHOME/../appservertools:$GSKIT_PATH/../bin:$UNISONHOME/ITA/cpa/ita:$OPENSSL_PATH:$UNISONHOME/CLI/bin:$PATH
               export PATH
               JAVA_HOME=$UNISONHOME/JavaExt/jre/jre
               export JAVA_HOME
               ;;
 
        HP-UX)
        GSKIT_LIBPATH_SUFFIX=lib
        uname -m | grep  ia64  > /dev/null
        if [ $? -ne 0 ]; then
             GSKIT_PATH=/usr/Tivoli/TWS/GSKit32/8/lib
             OPENSSL_LIBPATH=$OPENSSL32_PATH
                       OPENSSL_PATH=$OPENSSL32_PATH/../bin
        else
             GSKIT_PATH=/usr/Tivoli/TWS/GSKit64/8/lib64
             OPENSSL_LIBPATH=$OPENSSL64_PATH
                       OPENSSL_PATH=$OPENSSL64_PATH/../bin
        fi
 
               TWS_TISDIR=$UNISONHOME
               export TWS_TISDIR
               TISDIR=$UNISONHOME
               export TISDIR                        
                #for security reason HP could not allow to append the SHLIB_PATH if the script is launch with source operator
                IS_ROOT=`id 2>/dev/null | grep "uid=0"`
                SHLIB_SET=`env | grep "SHLIB_PATH"`
                if [ ! -z "$IS_ROOT" -a ! -z "$SHLIB_SET" ]
                then
                    SHLIB_PATH=$UNISONHOME/bin:$GSKIT_PATH:$OPENSSL_LIBPATH:.:$SHLIB_PATH:$UNISONHOME/ITA/cpa/ita/lib:$UNISONHOME/CLI/bin
                                       LD_LIBRARY_PATH=$UNISONHOME/bin:$GSKIT_PATH:$OPENSSL_LIBPATH:.:$SHLIB_PATH:$UNISONHOME/ITA/cpa/ita/lib:$UNISONHOME/CLI/bin
                else
                    SHLIB_PATH=$UNISONHOME/bin:$GSKIT_PATH:$OPENSSL_LIBPATH:.:$UNISONHOME/ITA/cpa/ita/lib:$UNISONHOME/CLI/bin
                                       LD_LIBRARY_PATH=$UNISONHOME/bin:$GSKIT_PATH:$OPENSSL_LIBPATH:.:$UNISONHOME/ITA/cpa/ita/lib:$UNISONHOME/CLI/bin
                fi
                export SHLIB_PATH
                               export LD_LIBRARY_PATH
                #Allow to use the GSKIt command line
                PATH=$UNISONHOME:$UNISONHOME/bin:$UNISONHOME/xtrace:$UNISONHOME/../appservertools:$GSKIT_PATH/../bin:$UNISONHOME/ITA/cpa/ita:$OPENSSL_PATH:$UNISONHOME/CLI/bin:$PATH
               export PATH
               JAVA_HOME=$UNISONHOME/JavaExt/jre/jre
               export JAVA_HOME
               ;;
 
        *)
               TWS_TISDIR=$UNISONHOME;export TWS_TISDIR
               TISDIR=$UNISONHOME;export TISDIR             
                PATH=$UNISONHOME:$UNISONHOME/bin:$UNISONHOME/xtrace:$UNISONHOME/../appservertools:$GSKIT_PATH/../bin:$GSKIT_PATH:$UNISONHOME/ITA/cpa/ita:$OPENSSL_PATH:$OPENSSL_LIBPATH:$UNISONHOME/CLI/bin:$PATH;export PATH
               JAVA_HOME=$UNISONHOME/JavaExt/jre/jre;export JAVA_HOME
                LD_LIBRARY_PATH=$GSKIT_PATH:$OPENSSL_LIBPATH:.:$LD_LIBRARY_PATH:$UNISONHOME/CLI/bin:$UNISONHOME/ITA/cpa/ita/lib
               SHLIB_PATH=$GSKIT_PATH:$OPENSSL_LIBPATH:.:$SHLIB_PATH:$UNISONHOME/ITA/cpa/ita/lib
               LIBPATH=$GSKIT_PATH:$OPENSSL_LIBPATH:$UNISONHOME/ITA/cpa/ita/lib:$LIBPATH
               export SHLIB_PATH
               export LD_LIBRARY_PATH
               export LIBPATH
               ;;
esac
+++ case $OSName in
+++ export TISDIR=/opt/wauser/TWS
+++ TISDIR=/opt/wauser/TWS
+++ export TWS_TISDIR=/opt/wauser/TWS
+++ TWS_TISDIR=/opt/wauser/TWS
+++ uname -m
+++ grep i
+++ grep 86
+++ '[' 1 -eq 0 ']'
+++ GSKIT_LIBPATH_SUFFIX=lib64
+++ export LD_LIBRARY_PATH=/opt/wauser/TWS/bin:/usr/Tivoli/TWS/GSKit64/8/lib64:/usr/Tivoli/TWS/GSKit32/8/lib:/usr/Tivoli/TWS/OpenSSL64/1.0.0/lib64:/opt/wauser/TWS/OpenSSL32/1.0.0/lib::.:/opt/wauser/TWS/ITA/cpa/ita/lib:/opt/wauser/TWS/CLI/bin
+++ LD_LIBRARY_PATH=/opt/wauser/TWS/bin:/usr/Tivoli/TWS/GSKit64/8/lib64:/usr/Tivoli/TWS/GSKit32/8/lib:/usr/Tivoli/TWS/OpenSSL64/1.0.0/lib64:/opt/wauser/TWS/OpenSSL32/1.0.0/lib::.:/opt/wauser/TWS/ITA/cpa/ita/lib:/opt/wauser/TWS/CLI/bin
+++ export PATH=/opt/wauser/TWS:/opt/wauser/TWS/bin:/opt/wauser/TWS/xtrace:/opt/wauser/TWS/../appservertools:/opt/wauser/TWS/ITA/cpa/ita:/usr/Tivoli/TWS/GSKit64/8/lib64:/usr/Tivoli/TWS/GSKit32/8/lib/../bin:/usr/Tivoli/TWS/OpenSSL64/1.0.0/lib64/../bin:/opt/wauser/TWS/OpenSSL32/1.0.0/lib/../bin:/opt/wauser/TWS/CLI/bin:/opt/wauser/TWSDATA/ITA/cpa/cs:/opt/wauser/TWS/ITA/cpa/ita:/sbin:/usr/sbin:/usr/bin:/opt/wauser/TWS/ITA/cpa/ita:/sbin:/usr/sbin:/usr/bin:/opt/wauser/TWS:/opt/wauser/TWS/bin:/opt/wauser/TWS/xtrace:/opt/wauser/TWS/../appservertools:/opt/wauser/TWS/ITA/cpa/ita:/usr/Tivoli/TWS/GSKit64/8/lib64:/usr/Tivoli/TWS/GSKit32/8/lib/../bin:/usr/Tivoli/TWS/OpenSSL64/1.0.0/lib64/../bin:/opt/wauser/TWS/OpenSSL32/1.0.0/lib/../bin:/opt/wauser/TWS/CLI/bin:/home/testhwa/MDM_package/TWS/LINUX_X86_64/Tivoli_Eclipse_LINUX_X86_64/TWS/JavaExt/jre/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/bin:/usr/bin:/opt/wauser/TDWB/bin
+++ PATH=/opt/wauser/TWS:/opt/wauser/TWS/bin:/opt/wauser/TWS/xtrace:/opt/wauser/TWS/../appservertools:/opt/wauser/TWS/ITA/cpa/ita:/usr/Tivoli/TWS/GSKit64/8/lib64:/usr/Tivoli/TWS/GSKit32/8/lib/../bin:/usr/Tivoli/TWS/OpenSSL64/1.0.0/lib64/../bin:/opt/wauser/TWS/OpenSSL32/1.0.0/lib/../bin:/opt/wauser/TWS/CLI/bin:/opt/wauser/TWSDATA/ITA/cpa/cs:/opt/wauser/TWS/ITA/cpa/ita:/sbin:/usr/sbin:/usr/bin:/opt/wauser/TWS/ITA/cpa/ita:/sbin:/usr/sbin:/usr/bin:/opt/wauser/TWS:/opt/wauser/TWS/bin:/opt/wauser/TWS/xtrace:/opt/wauser/TWS/../appservertools:/opt/wauser/TWS/ITA/cpa/ita:/usr/Tivoli/TWS/GSKit64/8/lib64:/usr/Tivoli/TWS/GSKit32/8/lib/../bin:/usr/Tivoli/TWS/OpenSSL64/1.0.0/lib64/../bin:/opt/wauser/TWS/OpenSSL32/1.0.0/lib/../bin:/opt/wauser/TWS/CLI/bin:/home/testhwa/MDM_package/TWS/LINUX_X86_64/Tivoli_Eclipse_LINUX_X86_64/TWS/JavaExt/jre/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/bin:/usr/bin:/opt/wauser/TDWB/bin
+++ export JAVA_HOME=/opt/wauser/TWS/JavaExt/jre/jre
+++ JAVA_HOME=/opt/wauser/TWS/JavaExt/jre/jre
 
CONFIGURE_DATASOURCE_PARAMETER_DATA_DIR=/opt/wauser/TWSDATA
+++ CONFIGURE_DATASOURCE_PARAMETER_DATA_DIR=/opt/wauser/TWSDATA
 
if [ ! -z "$UNISONWORK" ]
then
    ITA_CFG=$UNISONWORK/ITA/cpa/ita/ita.ini
else
    ITA_CFG=$UNISONHOME/ITA/cpa/ita/ita.ini
fi
+++ '[' '!' -z /opt/wauser/TWSDATA ']'
+++ ITA_CFG=/opt/wauser/TWSDATA/ITA/cpa/ita/ita.ini
export ITA_CFG
+++ export ITA_CFG
 
MAESTRO_OUTPUT_STYLE=LONG
+++ MAESTRO_OUTPUT_STYLE=LONG
export MAESTRO_OUTPUT_STYLE
+++ export MAESTRO_OUTPUT_STYLE
MAESTROLINES=-1
+++ MAESTROLINES=-1
export MAESTROLINES
+++ export MAESTROLINES
 
if [ ! -f "/opt/wauser/.noecho" ]
then
    echo HCL Workload Scheduler Environment Successfully Set !!!
fi
+++ '[' '!' -f /opt/wauser/.noecho ']'
+++ echo HCL Workload Scheduler Environment Successfully Set '!!!'
HCL Workload Scheduler Environment Successfully Set !!!
 
++ cd /opt/wauser
 
if [ ! -f "/opt/wauser/.noecho" ]
then
    echo HCL Workload Automation Environment Successfully Set !!!
fi
++ '[' '!' -f /opt/wauser/.noecho ']'
++ echo HCL Workload Automation Environment Successfully Set '!!!'
HCL Workload Automation Environment Successfully Set !!!
 
jobprop CURR_MSSHEETS 32355
+ jobprop CURR_MSSHEETS 32355
 
===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 56
= Job Memory usage (kb) : 6976
= Wed 03/18/2020 08:40:18 CET
=============================================================

Publish MRP Inventory Jobs:

This set of jobs parses the job log of the corresponding Parse MRP Inventory job, fetches the job log output, and stores it in a Workload Scheduler variable. Each parsed job log reflects the inventory level of one item: MACHINE OIL, POLISH, MS SHEETS, ANGLE BRACKETS:

===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PUBLISH_MRP_ANGLE_BRACKETS
= TASK      : <?xml version="1.0" encoding="UTF-8"?>
<jsdl:jobDefinition xmlns:jsdl="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdl" xmlns:jsdle="http://www.ibm.com/xmlns/prod/scheduling/1.0/jsdle" name="executable">
  <jsdl:variables>
    <jsdl:stringVariable name="tws.jobstream.name">INVENTORY_PLNG</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.jobstream.id">0AAAAAAAAAAAAC4H</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.name">PUBLISH_MRP_ANGLE_BRACKETS</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.workstation">MASTER_DA</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.iawstz">202003180839</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.promoted">NO</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.resourcesForPromoted">10</jsdl:stringVariable>
    <jsdl:stringVariable name="tws.job.num">632881493</jsdl:stringVariable>
  </jsdl:variables>
  <jsdl:application name="executable">
    <jsdle:executable interactive="false">
            <jsdle:script>#!/bin/sh
cd /opt/wauser
. ./twa_env.sh
jobprop MRP_ANG_BRK ${job:PARSE_MRP_LOG_ANGLE_BRACKETS.stdlist}
 
</jsdle:script>
        </jsdle:executable>
  </jsdl:application>
  <jsdl:resources>
    <jsdl:orderedCandidatedWorkstations>
      <jsdl:workstation>11C4F5D45E1011EAAB724BB1022D6B3F</jsdl:workstation>
    </jsdl:orderedCandidatedWorkstations>
  </jsdl:resources>
</jsdl:jobDefinition>
= TWSRCMAP  :
= AGENT     : MASTER_DA
= Job Number: 632881493
= Wed 03/18/2020 08:40:18 CET
===============================================================
Setting CLI environment variables....
HCL Workload Scheduler Environment Successfully Set !!!
HCL Workload Automation Environment Successfully Set !!!
 
===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 46
= Job Memory usage (kb) : 6976
= Wed 03/18/2020 08:40:18 CET
===============================================================
 
Comparison Jobs:
These 4 jobs compare, for each item, the Workload Scheduler variable from Publish Current Inventory with the one from Publish MRP Inventory. If the current inventory level is greater than the MRP inventory level, the job computes the excess inventory, stores the difference in a variable called EXCESS, passes a success output condition of EXCESS_INVENTORY, and returns an RC of 15.
If the MRP inventory level is greater than the current inventory level, the job computes the shortfall, stores the difference in a variable called NEEDS, passes a success output condition of NEEDS_INVENTORY, and returns an RC of 5.
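A minimal sketch of one such comparison script, assuming the current and MRP levels arrive as positional arguments through the variable substitution described earlier; the environment path mirrors the publish jobs shown above, and compare_inventory.sh is a hypothetical name:

#!/bin/sh
# compare_inventory.sh - hypothetical sketch of a comparison job body.
# $1 = current inventory level, $2 = MRP inventory level.
cd /opt/wauser && . ./twa_env.sh   # make jobprop available (path assumed)
CURR=$1
MRP=$2

if [ "$CURR" -gt "$MRP" ]; then
    jobprop EXCESS $((CURR - MRP))   # surplus, consumed by the Sell Inventory job
    exit 15                          # mapped to the EXCESS_INVENTORY output condition
elif [ "$CURR" -lt "$MRP" ]; then
    jobprop NEEDS $((MRP - CURR))    # shortfall, consumed by the Procure job
    exit 5                           # mapped to the NEEDS_INVENTORY output condition
fi
exit 0                               # balanced: no action needed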


Procure Raw Material Jobs:

These jobs run a script to procure raw material. This is a custom script of the company, and it additionally uses the NEEDS variable from the Compute job while placing the procurement order with a known seller (with which the company has tie-ups). These jobs run only if the condition passed by the Compute job is “LOW INVENTORY”. See the sketch and job logs below:
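A minimal sketch of procure_rawmaterial_machineoil.sh, consistent with the job log output below; the quantity argument is assumed to be supplied from the Compute job's NEEDS variable via substitution, and the ordering backend is the company's own:

#!/bin/sh
# procure_rawmaterial_machineoil.sh - hypothetical sketch.
# $1 = quantity to order, assumed to come from the NEEDS variable.
QTY=$1
echo "Placing Order for MachineOil Qty : $QTY"
# ... call the company's ordering backend here (not shown) ...
ORDER_ID=11095   # placeholder; a real backend would return this number
echo "Procurement Order: $ORDER_ID placed"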
 
===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PROCURE_RAW_MATERIAL_MACHINE_OIL
= USER      : unixda
= JCLFILE   : /home/unixda/procure_rawmaterial_machineoil.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881501
= Wed 03/18/2020 08:40:31 CET
===============================================================
Setting CLI environment variables....
HCL Workload Scheduler Environment Successfully Set !!!
HCL Workload Automation Environment Successfully Set !!!
Placing Order for MachineOil Qty :
Procurement Order: 11095 placed
 
===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 22
= Job Memory usage (kb) : 1548
= Wed 03/18/2020 08:40:31 CET
===============================================================
 
===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].PROCURE_RAW_MATERIAL_POLISH
= USER      : unixda
= JCLFILE   : /home/unixda/procure_rawmaterial_polish.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881502
= Wed 03/18/2020 08:40:31 CET
===============================================================
Setting CLI environment variables....
HCL Workload Scheduler Environment Successfully Set !!!
HCL Workload Automation Environment Successfully Set !!!
Placing Order for Polish Ltrs :
Procurement Order: 27169 placed

===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 22
= Job Memory usage (kb) : 1552
= Wed 03/18/2020 08:40:31 CET
===============================================================
 
Sell Inventory Jobs :
 
These jobs place a sales order with the Sales team to sell off all excess inventory. They run a custom script, fetch the EXCESS variable from the COMPUTE job, and return a sales order number. They run only when the COMPUTE job raises the EXCESS_INVENTORY condition:


===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].SELL_INVENTORY_MSSHEET
= USER      : unixda
= JCLFILE   : /home/unixda/sell_inventory_mssheet.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881504
= Wed 03/18/2020 08:40:31 CET
===============================================================
Setting CLI environment variables....
HCL Workload Scheduler Environment Successfully Set !!!
HCL Workload Automation Environment Successfully Set !!!
Sales Order: 18822 raised with Sales Team
 
in excess

===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 22
= Job Memory usage (kb) : 1548
= Wed 03/18/2020 08:40:31 CET
===============================================================
 
 
===============================================================
= JOB       : DAUNIX#INVENTORY_PLNG[(0839 03/18/20),(0AAAAAAAAAAAAC4H)].SELL_INVENTORY_ANGLE_BRACKET
= USER      : unixda
= JCLFILE   : /home/unixda/sell_inventory_anglebracket.sh
= TWSRCMAP  :
= AGENT     : DAUNIX
= Job Number: 632881503
= Wed 03/18/2020 08:40:31 CET
===============================================================
Setting CLI environment variables....
HCL Workload Scheduler Environment Successfully Set !!!
HCL Workload Automation Environment Successfully Set !!!
Sales Order: 2748 raised with Sales Team
 
in excess
 
===============================================================
= Exit Status           : 0
= Elapsed Time (hh:mm:ss) : 00:00:01
= Job CPU usage (ms) : 21
= Job Memory usage (kb) : 1548
= Wed 03/18/2020 08:40:31 CET
===============================================================
 
 
Balanced Inventory :
 
If the COMPUTE job reports a balanced inventory, no action is performed for the corresponding inventory item.

Jobstream INVENTORY_PLNG :

The job stream INVENTORY_PLNG implements the use case end to end as follows:
Job MATERIAL_REQ_PLANNING runs the material requirements planning (MRP).
Job CURR_INVENTORY fetches the current inventory levels.
The PARSE_CURR_LOG* jobs parse the current inventory levels from the inventory log and write them to the job log for each inventory item.
The PARSE_MRP_LOG* jobs parse the MRP inventory levels from the MRP inventory log and write them to the job log for each inventory item.
The PUBLISH_CURR* jobs publish the current inventory levels by fetching the job log of the PARSE_CURR_LOG* jobs through {job:JobName.stdlist}.
The PUBLISH_MRP* jobs publish the MRP inventory levels by fetching the job log of the PARSE_MRP_LOG* jobs through {job:JobName.stdlist}.
The COMP_* jobs compute the difference between the two inventory levels and store it in either the NEEDS or the EXCESS variable through an if statement and the jobprop utility. They also raise a success output condition of EXCESS_INVENTORY or LOW_INVENTORY and return RC=15 or RC=5, depending on the scenario.

​So, as you can see in this case: for Angle Brackets the result was EXCESS_INVENTORY, so the SELL_INVENTORY job was triggered; likewise for MS Sheet (EXCESS_INVENTORY, SELL_INVENTORY triggered). For Machine Oil the result was LOW_INVENTORY, so the PROCURE_RAW_MATERIAL job was triggered, and the same happened for Polish (LOW_INVENTORY, PROCURE_RAW_MATERIAL triggered). All the other conditional-dependency jobs, not needed for this case, were suppressed.
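To make this concrete, here is a hedged composer-style sketch of how such an output condition and conditional dependency could be defined (the job names, script path, and job stream structure are illustrative; SUCCOUTPUTCOND and FOLLOWS ... IF are the constructs for output conditions and conditional dependencies):

$JOBS
DAUNIX#COMP_ANGLE_BRACKETS
 SCRIPTNAME "/home/unixda/comp_anglebrackets.sh"
 STREAMLOGON unixda
 SUCCOUTPUTCOND EXCESS_INVENTORY "RC=15"
 SUCCOUTPUTCOND LOW_INVENTORY "RC=5"
 RECOVERY STOP

SCHEDULE DAUNIX#INVENTORY_PLNG
:
DAUNIX#COMP_ANGLE_BRACKETS
DAUNIX#SELL_INVENTORY_ANGLE_BRACKET
 FOLLOWS COMP_ANGLE_BRACKETS IF EXCESS_INVENTORY
DAUNIX#PROCURE_RAW_MATERIAL_ANGLE_BRACKETS
 FOLLOWS COMP_ANGLE_BRACKETS IF LOW_INVENTORY
END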
​So, for any business process, using a combination of variable passing and conditional dependencies, we can automate the process end to end using Workload Automation. Hope you enjoyed this blog.

Sriram V, Senior Tech Lead, HCL Technologies

I’ve been working with Workload Automation for the last 11 years in various capacities: IWS Administrator, SME, and India-SME. I later joined the Product team supporting Workload Automation on SaaS, and recently moved to Tech Sales and Lab Services for Workload Automation.

Learning new business models after a crisis: the role of automation


What is the role of Workload Automation in a disaster recovery mode? 

Forewords: coping with a crisis 
It is with an intense sense of humility that we approach this topic, since this article is written in the midst of the most severe crisis our generation has ever faced. 

Consequently, writing about how an IT solution can help in coping with - and recovering from - a planetary health emergency is not an easy task at all.
We want to be sure we convey our respect for the human aspects of the COVID-19 pandemic occurring these days, and at the same time listen to the voices that tell us how its economic consequences will also hurt humanity. We are well aware - and we even used automation to keep track of it - of the impact the current pandemic crisis is having on our daily lives. 

We feel that the correct adoption of proper IT disciplines and solutions can make a contribution in times like these, when millions of people are locked up at home and an economic crisis is being predicted by analysts. 

This is the reason why we are writing this article. 

Let’s begin by recalling that crisis is a word that, despite its common negative interpretation, also carries positive messages. 
In every western language, crisis has the same etymology: it comes from the Latin crisis, which in turn derives from the Greek krísis, meaning “choice” or “decision”. 

In Chinese, crisis is written as the concatenation of “danger” and “opportunity”, or, according to another interpretation, “danger” and “crucial moment”. 
Any of the above interpretations describes very well what our society is facing this year, and we’ll analyze the role of Automation in that light. 

So, what is Automation already doing for us today? 
A very simple consideration would explain it all: 
This is the time when more than ever it is evident how our lives, and our idea of “normality”, rely on the correct functioning of unattended IT processes. 
Living in a scenario where physical proximity of human beings is dangerous, the only people allowed to go to work are those whose job is strictly necessary for the survival of the entire population. Anybody else stays home, smart-working if possible, suffering unemployment otherwise. 

Governments had to evaluate and identify which jobs are strictly necessary, and the list is longer and more complex than we had ever imagined. But it would have been far longer if many activities weren’t automated. 

For the supply chain of our food, for energy production, and for so many other things, we rely on automated IT processes to keep living the life we assume is normal.
 

Without automation, many more people would have to put at risk their health to perform those activities, and some activities would not be possible at all.
 

Automation has changed from being merely a means to reduce costs and increase revenue, to being also a discipline that allows a safe and sustainable day-by-day life for all of us. 

Companies are now striving to keep functioning in emergency mode, and those that are better prepared for automation are coping better. 
 
These are two aspects of automation playing a positive role today: 
  1. Automation is allowing the provisioning of services needed for our welfare, especially in times of crisis. 
  2. Automation is safeguarding the jobs of many people, by running unattended activities that support smart workers and by allowing non-strictly-necessary businesses to keep functioning. 

Last but not least, in a context in which systems are running unattended more than ever, being able to rely on a trustworthy platform is key. 

Administrators connected from home, for example, may not be as responsive as they used to be, because: 
  • New IT issues (for example, granting remote access to all employees) may take most of their time  
  • The network connection from home may not be responsive enough for them to be immediately notified of problems and take immediate action  

HCL Workload Automation can help companies run their business unattended, with automatic recovery actions that eliminate the need for human intervention in most cases.  

And this is and will be true even after the crisis.
 

What automation can do for us right now and tomorrow 
 
If you are trying to figure out where your investments on automation will go once we enter the next phase of the emergency and business hopefully start accelerating again, consider the following aspects. 

You can accelerate the restart by getting ready to automate more and better: how many processes do you currently automate? What are your automation needs? From DevOps to IT tasks, passing through batch scheduling and business tasks, we know how complex your ecosystem can be. 

Using Workload Automation you can ensure governance on the whole process, while interconnecting these different flows and controlling them from one single platform. 

​At the moment all companies are experiencing a slowdown in activities: it is the right moment to work on your automation strategy and learn how to expand it. 
Last but not least, transform the lessons learned into business opportunities: are there areas you could have automated but didn’t? Here comes your chance to change that.  

How HWA can concretely support recovering from the crisis 

So, in practice, we believe HCL Workload Automation can help companies recovering faster from this crisis under two aspects: 

​1. Everybody has surely discovered a more efficient and more effective way of doing business, either via smart working or by reorganizing processes.  
Companies as well will soon reorganize some processes - if not many - to be more effective. 
Process reorganization will be done taking advantage of automation as much as possible; having a real hybrid end-to-end workload automation solution like HWA will provide a significant advantage. 

2. The economy needs to get back in shape as soon as possible. Companies make the economy, so companies need to become financially profitable in the shortest time possible. 
HWA can be a key element to increase the speed of operation, maximize quality and lead to higher productivity at lower costs, to help businesses be back on track rapidly. 
Employees can be dedicated to productive and value-added activities, leaving machines to perform repetitive tasks. 
We could not avoid this terrible crisis. Let’s work on what we can control: in our case, enabling the fastest recovery of your business. 
Want to learn more about Workload Automation? Visit our website or drop us a line at cristiano.plini@hcl.com or emanuela.zaccone@hcl.com

AUTHORS BIO  
Cristiano Plini  , HWA Sales Specialist   

​Cristiano is part of the Sales Team of HCL Workload Automation. He has sound experience in R&D, consultancy, and sales of IT solutions, gained in 25+ years working for worldwide enterprise industries in Europe, North America, and Latin America.
 
Emanuela Zaccone, HWA Product Manager 

15 years of experience in Digital Marketing and, since 2016, in Product Management. She has a background as a digital entrepreneur, with her latest Silicon Valley-based startup acquired in 2019. She strongly believes in a customer-centric and data-driven approach to product management. And she is now proud to rock the future of Automation at HCL. 

Workload Automation 9.5.0.2 is here: create an automation center of excellence to drive your digital transformation

​HCL and IBM are pleased to announce a new release for Workload Automation, version 9.5.0.2, with enhancements that expand the innovations delivered with version 9.5 and enable the orchestration of new scenarios, particularly for datacenter automation.
 
A new website, the Automation Hub, has been launched to allow our customers to easily find all the integrations available to orchestrate their business-critical processes.
Our Line of Business users are now fully empowered, with all database and plan objects available in workflow folders and with the possibility to delegate business control over the folders.
 
New features are also available for the Z engine, such as the integration with Zowe™ and the automatic deployment of new plug-ins, because for HCL and IBM, mainframe matters.
 
Take a look at this video for an overview of all the features!

Orchestrate your Business-Critical processes

Welcome to Your Automation Hub
Workload Automation customers now have a place where they can find the collection of all the out-of-the-box integrations provided by HCL and IBM, and all the integrations created by our business partners. 
Moreover, if you do not find the integration that you are looking for, you can make a request for it, or create it yourself and share it with the community: the new Workload Automation Lutist Development Kit can be downloaded for this purpose.
All the integrations with job type plug-ins are available for both distributed and mainframe customers; they require either dynamic or z-centric agents, and the Dynamic Workload Console for job definition.
The website includes descriptions of scenarios and use cases for all the integrations available.

The new integrations available on the Automation Hub are job type plug-ins for the Ansible, Chef Bootstrap, Chef Runlist, Kubernetes, and UrbanCode Deploy applications, and a plug-in for Zowe CLI, an open source mainframe tool.
Learn more about the new Data Center Automation integrations by watching the video at this link: https://youtu.be/1u-DXrKhrxI

Delegate Business Control

Version 9.5.0.2 fully empowers your Line of Business users by extending the support for workflow folders to all the scheduling objects. Now all your workflow definitions can be organized in folders to represent, in your environment, any category that makes sense for your business operations.
Folders also simplify security access. You can associate access control lists to individual folders to restrict which folders any single user or group can access. You can also delegate security management of a specific folder and its sub-folders to other users.
Dashboards and monitoring views can be filtered on workflow folders to enable the creation of fully isolated multi-tenant environments for your Line of Business users.

The video at this link https://youtu.be/sDmi3QPFwhw on the Workload Automation YouTube channel tells you more about this great new functionality.
 
Automatic Failover and active-active high availability 

Workload Automation high availability has always been supported through the configuration of backup engines. Now there is much more: with the new automatic failover and high availability features it is possible to ensure continuous operation. 
For the Automatic Failover you can configure one or more backup engines so that when a backup detects that the active master becomes unavailable, it triggers a long-term switchmgr operation to itself.
You can define potential backups in a list adding preferential backups at the top of the list. The backup engines monitor the behavior of the master domain manager to detect anomalous behavior.
It is also now possible to implement active-active high availability between the Dynamic Workload Console and the master domain manager. You can use a load balancer between the Dynamic Workload Console servers and the master domain manager so that, in the event the master needs to switch to a backup, the switch is transparent to console users. Configure the master domain manager and backup master domain managers behind a second load balancer so that the workload is balanced across all backup master domain managers and the master domain manager. Load balancing distributes workload requests across all configured nodes, preventing any single node from being overloaded and avoiding a single point of failure.

Workload Automation deployment on containers – Red Hat® OpenShift®

Red Hat® OpenShift® is one of the most popular Kubernetes platforms and now all Workload Automation containers can be deployed on Red Hat® OpenShift® V4.x. Two separate containers are provided, containing either the Workload Automation agent only (both dynamic for distributed and z-centric for mainframe customers), or the Workload Automation server, the console and the agent in a single container.
 
Event rules new user experience
 
The Dynamic Workload Console v9.5.0.2 offers a brand-new experience that optimizes the creation and editing of event rules.
It is a modernized user experience that makes the definition of event rules easier, more intuitive, and well organized, thanks to the structure based on the new workflow folders. Not only has the visual usability been enhanced, but also the whole management of event rule definitions, their properties, and the interactions with them. 
The new contextual help can guide you through the new interface and its fields. On the home page, you can find the page-related topics, and if they are not sufficient, you can search for what you need by using the search bar. Furthermore, by clicking on a field, the help automatically updates itself and shows you the information about the selected field.
Mainframe matters and evolves 

This section is dedicated to enhancements that are specific for the Workload Automation solution with a mainframe engine. 
 
Integrating with Zowe™
Zowe is an open-source project that enables you to interact with z/OS through modern interfaces. With the WA plug-in for the Zowe command line you can now issue Workload Automation commands to remotely control your workload: monitor and modify jobs, job streams, and resources, and issue WAPL commands. 
You can access the WA API through the API Mediation Layer (API ML) or connect the plug-in directly to the WA API. 
Watch the video Open Workload Automation to the modern era - Meet Zowe at this link https://youtu.be/Vk-yDsBWhP0 or learn more about the integration here.
 
Automatic deployment for new integrations plugins
A new feature is available to enable the automatic deployment of new plug-ins, or new plug-in versions, to z-centric agents. If you find a plug-in on the Automation Hub, you just need to download it to a zConnector folder, a component of the console server. At the first job submission for that job type, the z-centric agent will seamlessly manage it by requesting the latest plug-in version from the zConnector.
 
Control the system where a z/OS job is to be run
The system where you submit a z/OS job does not always coincide with the system where the job will be executed: JES could decide to route the job to another system, where the required resources are available. As a consequence, the checks made by the controller on the system where the job is submitted do not guarantee that the same conditions are found on the system where the job will be executed.
It is now possible to control the system where a z/OS job is to be run, and the job class that will be used for the submission, by setting the SYSAFF and JOB CLASS JCL keywords through the definition of new parameters in the JTOPTS initialization statement. 
 
Send an email if operation ended in error or is late
It is possible to configure Workload Automation on Z to send an email to a specific recipient or list of recipients when an alert condition occurs. The list of alert conditions includes long running jobs, jobs ending in error, late jobs, jobs waiting for special resources and more. You can customize alert conditions for multiple jobs using filtering selection criteria and also the email subject and text.
 
Connect to DB2 for z/OS 
You can now install the Dynamic Workload Console on WebSphere Application Server for z/OS Liberty and use DB2 for z/OS for the console configuration data. This enables Z customers to have full centralized control on the mainframe for their Workload Automation components, without the need for distributed servers or Linux on IBM Z systems to deploy the web console.
 
Install Workload Automation 9.5.0.2 and get ready to orchestrate your IT ecosystem!
 
You learned here about some of the new exciting enhancements that HCL and IBM just delivered. 
To get a full list of all the enhancements and customers’ RFEs implemented, please refer to the Summary of Enhancements section in the FP2 and SPE2 product documentation.
 
The new release enables many new automation scenarios and is a step into the future of Automation, so don’t hesitate and plan your next move!

Marco Cardelli - Workload Automation Product Manager
Marco has been working with IBM since 1990 and on IBM Workload Scheduler for z/OS since 1995. He started on IBM Workload Scheduler for z/OS as an L3 support specialist and then joined the development team as chief designer, starting with IBM Workload Scheduler for z/OS 8.3. In 2015 he became architect of the IBM Workload Scheduler for z/OS product. In September 2016, as part of the new partnership agreement between IBM and HCL, he moved to the new HCL Products & Platforms division. In January 2017 he left the development team and joined the Workload Automation Offering team.
Emanuela Zaccone, Workload Automation Product Manager 
An experienced product manager with a strong digital marketing and digital entrepreneurship background. As a digital entrepreneur she founded TOK.tv in 2012, reaching more than 40 million sports fans around the world before selling the company to the Minerva Networks Group in 2019. In the same year, she was granted the inventor title by patenting social TV. She completed a PhD between the universities of Bologna (Italy) and Nottingham (UK).   

Workload Automation & SAP: better together

When we talk about operational efficiency and orchestration of IT and Business workflows, workload automation is Enterprise Resource Planning’s best friend.  
We see this every single day with our customers, especially those using SAP to run their critical business processes.  

By leveraging the power of Workload Automation and making the most out of its integration with SAP, we have seen our customers move their businesses to the next level of innovation and maturity.
 

​Want to learn how? Let’s start from the beginning. 
WORKLOAD AUTOMATION 101 

From a high-level, wherever there is a process to automate, there is the need for Workload Automation to manage it.  
 
  • Workload Automation is the perfect platform to schedule, run and manage digital business processes from end to end, automatically.  And thanks to Workload Automation, business processing can take place without human intervention.  
  • ​​The platform boosts a customer’s business by seamlessly orchestrating complex workflows across multiple platforms and applications. It acts as a meta-orchestrator for continuous automation, leveraging containerization and an intuitive user interface, while offering the lowest total cost of ownership (TCO) on the market. Plus, it provides a single point of control for application developers, IT administrators and operators, providing them both autonomy and precise governance through centralized access control that includes auditing and versioning.  
  • The solution is available on mainframe, virtualized (on-premise), and on cloud and hybrid environments.   
  • Workload Automation is SAP S/4 HANA certified and is available on the SAP App Center.  
   
As recently stated here by Alexandra Thurel, Director of Product Management at HCL Software: “Adopting the right workload automation solution is key for businesses. Customers need a flexible and powerful orchestrator that can achieve advanced integration and execution of business-critical processes, combining SAP and non-SAP workloads while ensuring full governance and compliance of their automation.”  

How can you make the most out of SAP by combining it with the power of Workload Automation? 
 

Here are a few examples of what Workload Automation will help you do…
 


Standardization of Tools  
  • Manage SAP and non-SAP integrated workflows so you can run all end-to-end business, infrastructure and automation processes using the same platform. 
  • Securely monitor and manage all automated workflows from a single view reducing labor and redundant tool costs.
  • Meet business critical SLAs to reduce fines and penalties. 
  • Adhere to regulatory auditing requirements to reduce rework and possible fines and penalties.
​​
Value for Investment  
  • Consolidate your workload automation resources and tools under one platform to reduce resource and licensing costs. 
  • Reduce error-prone manual processes that create delays in critical business processing. 
  • Simply migrate your SAP workloads by importing them directly into Workload Automation from your SAP environments. 
  • Create advanced solutions utilizing included feature-rich capabilities and future-proof your investment with free updates and plugins. 
​​
Ease of Use 
  • Integrate with current IT management application landscapes using built-in features including no cost APIs and plugins to take control of your business. 
  • Monitor using a state-of-the-art GUI with customizable dashboards that give you the view you need to effectively manage your workflows. 
  • Streamline management with Workload Folders to securely organize work according to your business and organizational needs. 
  • Containerize your workload automation agent and infrastructure deployments to reduce deployment and upgrade times. 

Cutting-Edge Innovation  
  • Stay compliant with built-in auditing so you can understand and report on who did what, where and why and on which ticket.  
  • Manage your workflows using built-in version control. You can even roll back to a previous version of a workflow. 
  • Use What-If impact analysis and advanced analytics to determine exactly how your workflows will run and when they will be complete.  
  • Use zero-downtime agent upgrades that allow you to upgrade an agent without causing an application batch outage during the upgrade window.  
  • Use HCL Clara, our NLP L-1 automation bot, to answer Workload Automation questions and to provide a user-friendly window into your Workload Automation workflows, complete with secure access to a library of typical actions that can be executed.  
  • Use HCL HERO (our WA infrastructure health-check monitor and recovery runbook optimizer) to monitor not only the Workload Automation application infrastructure but also to understand how your Workload Automation systems are performing using Machine Learning. The runbook optimizer provides a means to quickly recover from your WA environment from typical issues you may experience.  
​​
Cloud Adaptability  
  • Manage workflows on mainframe, virtually (on premise), on cloud, or hybrid clouds allowing you to “run-from-anywhere”.  
  • Use our “run-from-anywhere” philosophy to securely manage your application workflows across all platforms and clouds managed from a single point of control. 
  • Speed your cloud evolution with Workload Automation on AWS®, MS Azure® and the SAP App Center. 
  • Use Docker® images to deploy and manage agents and infrastructure components to quickly deploy and upgrade your Workload Automation environments. As part of our continuous-delivery strategy, new features are delivered regularly that can be used after each upgrade. 
​​
ARE YOU READY FOR THE FUTURE? 

Want to learn more?  

​Visit the Workload Automation website or drop an email to Bruce.Whitehead@hcl.com or emanuela.zaccone@hcl.com and let’s talk about how to move your SAP & automation game to the next level!

AUTHOR’S BIOS

Bruce Whitehead, Tech Sales Workload Automation
Bruce has 23+ years of Workload Automation experience. He started working in the Workload Automation arena at Best Buy, a leading US electronics retailer. After 12 years there he spent 10 years at United Health (Optum) and helped them deploy and manage one of the largest workload automation environments in the world. Currently he is a Tech Sales Specialist and Architect working for HCL Software. ​
Emanuela Zaccone, Workload Automation Product Manager
15+ years of experience in Digital Marketing and, since 2016, in Product Management. She has a background as a digital entrepreneur, with her latest Silicon Valley-based start-up acquired in 2019. She strongly believes in a customer-centric and data-driven approach to product management. And she is now proud to rock the future of Automation at HCL. ​

Workload Automation in Full High Availability with Automatic Master Failover

Nowadays, the global marketplace requires systems that can react to fluctuating loads and unpredictable failures, with the service-level agreement (SLA) goal of being available 99.999% of the time. Merely mitigating the negative impacts of failures, disasters, and outages after the fact cannot even be contemplated in today’s competitive world: a disaster recovery strategy is too error-prone and resource-consuming, and businesses need to operate following an “always on” model. 
This requirement makes no exception for Workload Automation, which must manage your business-critical batch and real-time workload in a downtime-free environment. 
 
Workload Automation provides the following high availability options to help you meet the SLA goal:  
               - DWC Replica in Active/Active configuration 
               - Master Replica in Active/Passive configuration 

With Workload Automation V9.5 Fix Pack 2, the high availability configuration offers a new Automatic Failover feature, without the need for additional third-party software. These options, coupled with support for zero-downtime upgrade methods for agents and the entire workload environment, ensure continuous business operations and protection against data loss. 

Want to see how you can configure high availability in a single-domain network, with scalable and auto-recoverable masters? Read on to find out how to leverage the new Automatic Failover feature to reduce to a minimum the impact on workload scheduling and, more generally, on the responsiveness of the overall system when unplanned outages of one or more master components occur. 

A simple Workload Automation configuration: master not in high availability

​In a configuration with a single master, with both dynamic and fault-tolerant agent (FTA) workstations defined on it (Figure 1), you might already be aware of the various ways components auto-recover from a failure, or of how they continue to perform their job without an active master: 
  • If there are communication problems with the domain manager, the FTA can run its jobs locally. 
  • If a TCP/IP connection is not available, the store-and-forward message mechanism queues messages to prevent the loss of job status updates coming from the FTAs and dynamic agents. 
  • ​If the WebSphere Liberty Profile server process goes down, the watchdog service attempts to recover by restarting it.
Figure 1. Single master configuration with dynamic and FTA agent workstations

So, even with a simple configuration, you can still achieve a certain level of fault tolerance. 

And let’s not forget that you can already scale up/down your dynamic agents. For example, depending on how heavy the workload is on your workstations, you can choose to define a pool of dynamic agents and schedule your jobs on the defined pool so that jobs are balanced across the agents in the pool and are reassigned to available agents should an agent become unavailable (see this blog for more details on an elastic scale solution using Docker to configure a list of pools to which an agent can be associated). 

However, as you know, an environment that can only continue to orchestrate the jobs in the current plan at the FTA level doesn’t quite cut it in the face of failure. You still rely on the availability of the master to schedule on dynamic agents, to extend your plan, and to resolve dependencies between jobs running on different agents or domains. 

 Workload Automation configuration: master in high availability 

The next incremental step toward high availability is to replicate the master in our network (Figure 2). Because the WA cluster configuration is active-passive, any replicated master instances will be backup replicas.  
Figure 2 Environment with a master and several backup masters
If an unplanned outage occurs, you can simply run the commands conman “switchmgr masterdm;BKM” and conman “switchevtproc BKM” (to switch the master and the event processor to a backup) from any of your backup masters (oh, and by the way, since V9.5 you no longer need to manually stop/start the broker application), and then switch the workstation definition types FTA<->MASTER to make the switch permanent (the full procedure is in this blog here). Moreover, to be able to schedule the FINAL job stream independently of which master is current, you can define an XA (extended agent) workstation with $MASTER as its host workstation (that is, the host of the current master).
The FTAs and dynamic agents can link automatically to the newly selected master by reading the Symphony file and by checking the configured Resource Advisor URL list (the list of the master’s endpoints).
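In practice, the switch boils down to two conman commands, run from the chosen backup (BKM is an illustrative workstation name):

conman "switchmgr masterdm;BKM"   # promote BKM to master of the masterdm domain
conman "switchevtproc BKM"        # move the event processor to BKM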

With this, your workload resumes, enabling business continuity. 

Ok, but this requires some kind of monitoring system that would alert IT teams of failures and, after the issue is analyzed, would run the switch manager procedure. Alternatively, high availability can be achieved automatically by using a suitably configured external supervisor system, such as TSAMP (IBM Tivoli System Automation for Multiplatforms).
So you ask, how can you further minimize system unavailability? Keep reading.

Master in high availability with automatic failover enabled 

Starting with Version 9.5 Fix Pack 2, you can leverage the new Automatic Failover feature: the first backup engine to verify and detect that the active master is unavailable starts the long-term permanent switchmgr to itself. The same applies to the event processor: the backup event processor(s) automatically verify and detect that the active event processor (which can be different from the current master workstation) is unavailable, and then start the long-term switchevtproc to themselves.  

And not only that, but the masters (active or passive) practice self-awareness: they can check the local FTA status and, if any of the processes go down, they are able to automatically start the recovery of the FTA.

To better understand how the automatic failover feature works, let’s look at some details about the role each component plays to detect a failure or recover from it:
  1. Each backup monitors the status of the active master.
  2. Each master (active or backup) monitors its own FTA status. If any of its own FTA processes (mailman/batchman/jobman) go down, automatic recovery is attempted by the master (max 3 attempts).
  3. If the WebSphere Liberty Profile server goes down, the watchdog process attempts to recover it. 
  4. If the active master cannot be automatically restored within 5 minutes (the threshold after which the master is declared unavailable) because either:
  • The FTA and/or Liberty Server are still down
  • The engine is unable to communicate with the database
Then, a permanent switch to a backup is automatically triggered by any of the backup candidates.

Nice, right? With a fresh WA installation in a UNIX environment, this feature is automatically enabled, and the XA workstation hosted by the $MASTER hostname, with the FINAL and FINALPOSTREPORTS job streams, is automatically created at installation time. Otherwise, if you are coming from a product update or upgrade, after you have migrated your backups and your master, you can enable the feature via the optman command, optman chg enAutomaticFailover = yes, and change the FINAL and FINALPOSTREPORTS job streams to move the job streams and all of their jobs from the master to the XA workstation. 

Let’s suppose that you have multiple backup masters and you want greater control over which one of them is considered first as the candidate master for the switch operation. For your convenience, you can use the following new optman options:
  • workstationMasterListInAutomaticFailover
  • workstationEventMgrListInAutomaticFailover  

These are two separate lists of workstations, each list containing an ordered comma-separated list of workstations that serve as backups for the master and event processor, respectively. If a workstation is not included in the list, then it will never be considered as a backup. The switch is first attempted by the first workstation in the list and, if it fails, an attempt is made by the second in line, and so on. 
If no workstation is specified in this list, then all backup master domain managers in the domain are considered eligible backups. This gives you an extra level of control over your backups.
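For example, a hedged sketch of how the feature and the backup lists could be configured (the workstation names are illustrative):

optman chg enAutomaticFailover = yes
optman chg workstationMasterListInAutomaticFailover = BKM1,BKM2
optman chg workstationEventMgrListInAutomaticFailover = BKM2,BKM3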

For further granularity, you can choose to use only a subset of one list in the other list, or choose to use two completely different lists. You might have a backup that can serve as the event manager backup, but that you don’t want to be considered as a potential master domain manager backup. Also, if the event manager fails but the master domain manager is running fine, then only the event manager switches to a backup manager defined in the list of potential backups. Another example where these two distinct lists can be useful is if you have a dedicated environment for job orchestration and a different one for the event processor. You can continue to keep this separation of duties by enabling automatic failover and configuring the new optman options accordingly (Figure 3): 
​Figure 3 Separate backup lists for MDM and event processor
If you have a Dynamic Workload Console (DWC) and/or another client application based on the exposed REST API of the master, you might be worried about how to update the master connection info when an automatic switch occurs. The best way to do this is with a load balancer sitting in front of your masters and behind the DWC, and by specifying the public hostname of the load balancer as the endpoint of your engine connections in the DWC or in your client apps. In this way, you won’t have to know the hostname of the current active master to interact with Workload Automation features (Figure 4). This becomes possible with another feature introduced in Fix Pack 2 that enables any backup master to satisfy any HTTP request, even a request that can be satisfied only by the active master (i.e. requests on the Workload Service Assurance), by proxying the request to/from the active master itself. 
Figure 4 Load balancer placed behind the MDM and BKMs
​DWC, Master and RDBMS in high availability: WA in full high availability 

At this point, to have a fully high available WA environment, the only thing missing is configuring the RDBMS and DWC in high availability. 

If your RDBMS has the HADR feature and you have enabled it (a Db2 example here), you can configure the Liberty server’s datasource.xml file of the master and backup components, adding the failover properties, whose key-value pairs depend on the specific RDBMS vendor. Db2’s datasource can be configured with this set of properties in the XML element named properties.db2.jcc:
<properties.db2.jcc
    databaseName="TWS"
    user="…"
    password="…"
    serverName="MyMaster"
    portNumber="50000"
    clientRerouteAlternateServerName="MyBackup"
    clientRerouteAlternatePortNumber="50000"
    retryIntervalForClientReroute="3000"
    maxRetriesForClientReroute="100"
/>
​For the DWC, instead, you just need to ensure that it is connected to an external DB (not the embedded one), replicate it, and link it to a load balancer that supports session affinity, so that requests related to the same user session are dispatched to the same DWC instance.     

In Figure 5, the load balancers are depicted as two distinct components, the most general case possible, but you can also use the same component for balancing the requests to the DWC and to the masters:
Figure 5 Full high availability WA environment
​At this point, congratulations, you have reached a full WA environment in high availability!

Author's Bio
​Paolo Pisa, Senior Software Developer 

Paolo joined IBM as a Software Engineer, working on Full Stack technologies and covering technical leader roles. Since 2017 he has worked in HCL as Technical Leader of WA’s Deployment Team, with a main focus on Dockerizing Java EE-based applications.  
He has a strong background in computer science technologies, data structures, algorithms, and distributed computing. He is the subject-matter expert for the IBM Liberty server runtime environment.
Louisa Ientile, Information Developer

Louisa works as an Information Developer planning, designing, developing, and maintaining customer-facing technical documentation and multimedia assets.  Louisa completed her degree at University of Toronto, Canada, and currently lives in Rome, Italy with a passion for food, wine, beaches, and la dolce vita. 

How to make the most out of Workload Automation and SAP Solution Manager

SAP Solution Manager is a central support and system management suite, with the aim of addressing questions like … 
Job Management comprises several applications to establish standardized, formal processes that support the management of centralized, end-to-end, solution-wide background operations, such as: 
    Processes for requesting new jobs, job changes or deletion of jobs 
    Documentation and central scheduling of jobs 
    Central monitoring of jobs and error handling in case of failures  

Job Management is one of the elements of the “Run SAP like a Factory” concept. 
An SAP system landscape commonly includes many installed SAP and non-SAP systems; SAP Solution Manager is intended to reduce complexity and centralize the management of these systems and of end-to-end business processes.  
It helps to minimize risks and to reduce total cost of ownership. SAP Solution Manager runs in the solution landscape, facilitating the technical support of the distributed systems. It is a lifecycle management platform used to implement, run, and optimize SAP applications.  
The SAP Solution Manager supports the following scenarios: 
  1. Service Desk 
  2. Implementing and Upgrading SAP Solutions 
  3. Job Management
  4. Change Management 
  5. Solution Monitoring 
  6. SAP Services & Support 
  7. Root Cause Analysis 
Workload Automation provides integration with SAP Solution Manager to enable users to manage and monitor jobs from within SAP’s Solution Manager Job Scheduling Management solution.  

This integration is based on the SAP SMSE interface. 

SMSE (Solution Manager Scheduling Enabler) enables communication from Solution Manager to external scheduling tools: 
  • Schedule jobs from Solution Manager in external scheduler 
  • Document jobs from external scheduler in Solution Manager 
  • Monitor jobs from external scheduler in Solution Manager 
The integration introduces the following benefits: 
  • Reduce the cost and complexity. 
  • Eliminate the need to manually define Workload Automation jobs into SAP Solution Manager. 
  • Reduce operational cost associated with scheduling and monitoring jobs. 
  • Increase operator's efficiency with automation of jobs and job chain management. 
  • Enable the documentation of jobs defined in Workload Automation. 
 
It addresses the following company challenges: 
  • The need to maintain extensive job documentation to improve business efficiency. 
  • The need to avoid additional effort and costs to train users on multiple scheduling engines. 
  • The need to manage a heterogeneous landscape, which requires automating SAP and non-SAP workloads. 
 
Standard scheduling versus External Scheduling 
  • Central place for scheduling 
  • One scheduling entity for all systems 
  • Ability to schedule on SAP and non-SAP systems  
  • Ability to build complex scenarios with dependencies 
Workload Automation, plugged in as the external scheduler in Solution Manager, will … 
… extend scheduling capabilities 
… reduce the user training effort related to multiple scheduling engines 
… centralize job documentation 
… increase the ability to manage and monitor business processes 
Move into action in three steps … 

1) registering the master domain manager on SAP Solution Manager 
 2) scheduling 
 3) monitoring 

​Want to learn more?  

Visit the Workload Automation website, explore SAP topics on the WA community, take a look at Workload Automation on the SAP App Center, or drop me an email at marco.borgianni@hcl.com to talk about how to move your SAP & automation game to the next level! ​


Author's Bio
Marco Borgianni, Senior Technical Specialist, HCL Technologies

Marco Borgianni works as a Senior Technical Specialist in the HCL services team for the Workload Automation area, in the HCL Software business unit. Marco has been working in the Workload Automation area since 2000, when he joined the IBM software laboratory, covering several roles: developer, tester, Level 3 support specialist, and customer solution provider. In all these roles he continuously worked with ERP and Business Intelligence software, especially SAP. Nowadays he acts as an integration architect for ERP systems, especially SAP and other players in this market. 

How to Automate Configuration Management with the Chef in Workload Automation

Let us understand Configuration Management this way: assume you must deploy software to hundreds of machines. This software can be an operating system, application code, or an update to software that is currently running. You could perform this task manually, but what happens if you have to complete it overnight because an important mass event, with heavy traffic foreseen, takes place at your organization the next day? Even if you were prepared to do it by hand, there would be a high chance of errors on your big day, and returning to the previous stable version would not be easy to do manually.

To solve this problem, Configuration Management was introduced. By using Configuration Management tools like Chef, we can achieve this.
In order to let Workload Automation users make the most out of Chef, we have added two plugins on the Automation Hub, the catalogue of Workload Automation integrations to automate more and better.

We have divided two major functionalities of Chef into two plugins, as follows.
The ChefBootstrap plugin enables you to schedule and monitor the installation of the Chef client on one or more nodes; you can also define your Chef server authentication credentials and register the nodes to make them communicate with the Chef server.

The ChefRunlist plugin enables you to schedule and monitor the execution, on the nodes, of cookbooks and recipes configured on a Chef server. 
 
Now let us see how both of these plugins work.

The prerequisite for each plugin is a Chef workstation setup and a Chef server configuration on your agent, so that it can connect to your Chef server.  
 
ChefBootstrap:
How to bootstrap a node using ChefBootstrap plugin: 
 
Log in to the Dynamic Workload Console and open the Workload Designer. Choose to create a new job and select the “ChefBootstrap” job type in the Cloud section. 
Establishing connection to the Chef server: 
 
In the Connection tab, specify the repository path of the configuration and .pem files in the Repo path field to let Workload Automation interact with the Chef server, and click Test Connection. A confirmation message is displayed when the connection is established. 
 
Note: The configuration file should contain: 
node_name = the organization name of the Chef server  
client_key = the location of the .pem file  
chef_server_url = the Chef server URL  
chef_license 'accept' = to be added only if you are using chef-client version 15 or later 
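For instance, a minimal sketch of such a configuration file (all values are assumptions to adapt to your Chef server and organization):

node_name       'my_org'
client_key      '/opt/chef-repo/.chef/my_org.pem'
chef_server_url 'https://chef.example.com/organizations/my_org'
chef_license    'accept'   # only needed for chef-client version 15 or later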
Bootstrap a Node: 
 
In the Action tab, specify the details of the node machine that you want to bootstrap. A node could be any physical, virtual, or cloud device.  
Provide the host name of the node, a node name, and either a password or the path to the file in which you stored your SSH private key, to establish a connection to the node machine. 
Click Test Connection to verify the connection to the node. A confirmation message is displayed when the connection is established. 
Submitting your job:
​ 
It is time to submit your job into the current plan. You can add your job to the job stream that automates your business process flow. Select the action menu in the top-left corner of the job definition panel and click Submit Job into Current Plan. A confirmation message is displayed, and you can switch to the Monitoring view to see what is going on. 
 
Track/Monitor your Job: 

You can also easily monitor the submitted job in WA by navigating to the “Monitor Workload” page. 
Select the job and click the Job Log option to view the logs of the ChefBootstrap job. 
Here, you can see that the Chef client is installed on the node and that the connection between the Chef server and the node is established successfully.

 
Extra Information: 
 
You can see that there are a few “Extra properties” provided by the plug-in, which you can use as variables in the next job submission. 
ChefRunlist: 
 
How to add recipes to a node using ChefRunlist plugin: 
 
Log in to the Dynamic Workload Console and open the Workload Designer. Choose to create a new job and select “ChefRunlist” job type in the Cloud section. 
Establishing connection to the Chef server: 
 
In the Connection tab, specify the repository path of the configuration and .pem files in the Repo path field to let Workload Automation interact with the Chef server, and click Test Connection. A confirmation message is displayed when the connection is established. 
Note: The configuration file should contain: 
node_name = the organization name of the Chef server  
client_key = the location of the .pem file  
chef_server_url = the Chef server URL  
chef_license 'accept' = to be added only if you are using chef-client version 15 or later 
Add Recipes to a Node: 
 
In the Action tab, specify the recipes that need to be executed on the selected nodes.
Click the search button under Recipes to look up the list of recipes from the Chef server. Multi-select the recipes that you want to apply to the nodes.
Submitting your job:
 ​ 
It is time to submit your job into the current plan. You can add your job to the job stream that automates your business process flow. Select the action menu in the top-left corner of the job definition panel and click Submit Job into Current Plan. A confirmation message is displayed, and you can switch to the Monitoring view to see what’s going on. 
Track/Monitor your Job: 

You can also easily monitor the submitted job in WA by navigating to the “Monitor Workload” page. 
Select the job and click the Job Log option to view the logs of the ChefRunlist job. 
Here, you can see that the selected recipes are applied to the selected nodes.

Extra Information: 
You can see that there are a few “Extra properties” provided by the plug-in, which you can use as variables in the next job submission. 
Therefore, the ChefBootstrap and ChefRunlist plugins in Workload Automation are a best fit for anyone looking for complete automation of configuration management.

Are you curious to try out the Chef plugin? Download the integrations from the Automation Hub and get started or drop a line at santhoshkumar.kumar@hcl.com.

Author's BIO
Dharani Ramalingam, Senior Java Developer at HCL Technologies

Works as a Plugin Developer in Workload Automation. Technology enthusiast who loves to learn new tools and technologies. Acquired skills on Java, Spring, Spring Boot, Microservices, ReactJS,  NodeJS, JavaScript, Hibernate.

Arka Mukherjee, Quality Analyst at HCL Technologies

Working as a Quality Analyst for the Workload Automation team in HCL Software, Bangalore. Has worked on both manual and automated test scenarios across various domains.
Rabic Meeran K, Technical Specialist at HCL Technologies

Responsible for developing integration plug-ins for Workload Automation. Hands-on with different programming languages and frameworks like Java, JPA, Spring Boot, Microservices, MySQL, Oracle RDBMS, Ruby on Rails, Jenkins, Docker, AWS, C and C++.

WAz: A new easy way to Create, Modify, Replace and Backup Variable Tables

The Workload Automation Programming Language (WAPL) interface allows you to easily manipulate variable tables through batch jobs. 

In fact, it supplies a Batch Loader-like processor through which it is possible to CREATE and MANIPULATE variable tables inside the IBM Workload Scheduler database. 

For example, through the following WAPL JCL: 
You can create the MYVARTAB table in the database.  Note that the BL-like statements can also be contained in a dataset. 

​You can create, update and replace a variable table using respectively the CREATE, UPDATE and REPLACE option in the DBMODE parameter.
 
But the most interesting capability is the possibility to BACKUP a variable table, in order to recreate it later or in another WAz installation. 

Using the following commands in a WAPL JCL: 
You obtain the following statements: 
These, through another WAPL JCL, you can run to create the same table, or change to create a different one. 
 
…and even backing up all the tables is possible: 

Author's BIO
Raffaella Viola, Advisory Software Engineer, HCL 

Raffaella Viola is currently part of the Workload Automation for z/OS development team. In this role she has the responsibility to analyze customer requirements and to design and implement related solutions. She started her experience with IBM Workload Scheduler for z/OS in 2006, covering different roles both in development and in the L3 customer support team, where she dealt with real field problems and had many opportunities to analyze customer needs. Raffaella has been working at IBM since 1992 on several zSeries products, such as IBM NetView Distribution Manager and IBM Tivoli Decision Support. She graduated with honors in Electronic Engineering in Italy and also holds a piano conservatory degree. She likes cooking and listening to music.  
Ilaria Rispoli, Client Advocacy Manager, HCL

Ilaria Rispoli works in Workload Automation area and is leading the Advocacy Program for both on-premises and cloud solutions of the product in Europe, Middle East, Africa, and worldwide. She started her experience with IBM Workload Scheduler in 2000, covering different roles in development, customer support and verification team. Since 2016, thanks to her customer interaction experience, she has been appointed to lead the HCL Advocacy Program striving to deliver a high-touch, highly interactive approach to customer relationships, and to provide the greatest value and service to customers through strong connections. 

How to make the most out of REST APIs in  Workload Automation

REST APIs in WA have been around for a long time, but their usage is still not clear to many WA admins who have used the product for years. This blog provides an introduction to the WA REST APIs and attempts to clear up some doubts regarding their usage. 
The REST API can be executed using the curl binary: curl hits a service URL and passes a JSON input to the service.  
The service URL varies depending on the operation performed. 

It could either be a REST POST Request, a REST GET Request or a REST PUT Request. 

The response from a REST request is also in the form of a JSON Response. 

REST API Calls in Workload Automation are made through a GUI and the GUI URL can be accessed using the following link     https://FullyQualifiedNameofMDM/twsd . 

The UI looks as below; the REST operations are classified in terms of engine, eventrule, model (DB), plan, and security.
Running a job query to get the Plan Job ID:

The Plan Job ID can be used in many plan-related operations for a job. Extracting the Plan Job ID is therefore important and is the first step in all such operations.

To extract the Plan Job ID for a job named TESTJOB, the following JSON query can be executed:
 
{
  "filters": {
    "jobInPlanFilter": {
      "actualKey": "S_MDM;JOBS;TESTJOB"
    }
  },
  "sorters": {
    "jobInPlanSorter": {
      "jobstreamScheduledTime": {
        "descending": false,
        "priority": 1
      }
    }
  }
}
 
The “How Many” field must also be set to receive the appropriate response.
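As a reference, here is a sketch of how the query could be issued with curl (the host name, credentials, and exact service path are assumptions; check the GUI at /twsd for the real URL in your release). The JSON above is assumed to be saved as job_query.json:

# All names below are illustrative; verify the path against the /twsd UI
curl -k -u wauser:wapassword \
  -X POST "https://mdm.example.com/twsd/plan/current/job/query?howMany=100" \
  -H "Content-Type: application/json" -H "Accept: application/json" \
  -d @job_query.json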
The output is returned as a JSON response; it includes the id field, which is the Plan Job ID (alphanumeric). In this case the ID returned is “27371e93-9112-3798-a8fe-10f44473190a”.
Extending the scenario to confirm the job to SUCCESS:

The job, when viewed from conman with “sj S_MDM#JOBS.TESTJOB”, looks as follows. The job was submitted with the confirmed option, so that the true status can be managed by the operator after checks.
The Confirm Success REST API query can be executed by simply passing the plan ID, which is “current”, along with the job ID, which is the Plan Job ID returned in the previous step: 27371e93-9112-3798-a8fe-10f44473190a.
The response returned is shown below; the success return code in this case is 202.
The output, when verified through conman “sj @#@.TESTJOB”, shows that the job was marked SUCCESS.
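For reference, a hedged sketch of how the confirm call might look from the command line (the action path is an assumption modeled on the /twsd service URL pattern above; the plan ID is “current” and the job ID comes from the previous step):

# Path is an assumption; consult the /twsd UI for the exact confirm URL
curl -k -u wauser:wapassword \
  -X PUT "https://mdm.example.com/twsd/plan/current/job/27371e93-9112-3798-a8fe-10f44473190a/action/confirm/SUCC"

A 202 return code, as shown above, indicates that the request was accepted.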
Submitting a job stream through the REST API:

When submitting a job stream through the REST API, the model ID of the job stream must first be fetched
from GET /model/jobstream:
The job stream name and the workstation name are supplied as input to the REST call:
The output returned is a JSON response that includes an ID in the header section; this is the job stream ID of the job stream, in this case 94938606-bdeb-3504-b5cc-538a6b8eea04.
Next, a make_jobstream function is executed to create a make_jobstream instance-in-plan object. The input parameters in this case are the DB job stream ID, the plan ID (which is “current”), and a JSON input supplying the input arrival time:
This returns a JSON response, which is the make-jobstream-in-plan object to be supplied as input to the Submit Jobstream REST API query.
JSON response retrieved:

The JSON response received must be modified to include the input arrival time field; it can then be used for the submission:
 
{ 
  "key": { 
    "name": "TESTJOB", 
    "startTime": "2019-10-25T07:38:49", 
    "workstationKey": { 
      "name": "S_MDM" 
    } 
  }, 
  "jobStreamDbIdentifier": "94938606-bdeb-3504-b5cc-538a6b8eea04", 
  "workstation": { 
    "name": "S_MDM" 
  }, 
  "inputArrivalTime": "2019-10-25T07:38:49", 
  "productionDate": "2019-10-25T10:48:03", 
  "scheduledDate": "2019-10-25T00:00:00", 
  "jobStreamStats": { 
    "numberOfJob": 0, 
    "numberOfCurrentNodes": 0, 
    "numberOfSuccessfullJob": 0, 
    "numberOfDeltaJob": 0, 
    "numberOfDeltaSuccJob": 0, 
    "numberOfNotRunningJob": 0, 
    "numberOfExecutingJob": 0, 
    "numberOfAbendedJob": 0, 
    "numberOfFailedJob": 0, 
    "numberOfSkelJob": 0, 
    "numberOfUndecidedJob": 0 
  }, 
  "limit": -1, 
  "resDepSequenceNum": -1, 
  "followsDepSequenceNum": -1, 
  "inOrder": false, 
  "replicated": false, 
  "carriedForward": false, 
  "carryForward": false, 
  "dontTouch": false, 
  "userJobs": false, 
  "thisCpu": false, 
  "external": false, 
  "needResources": false, 
  "hasResources": false, 
  "released": false, 
  "pendingCancellation": false, 
  "aliased": false, 
  "every": false, 
  "hasInternetworkDependencies": false, 
  "heldByUser": false, 
  "lateJobStream": false, 
  "pendingPredecessor": false, 
  "zombie": false, 
  "jobs": [ 
    { 
      "name": "TESTJOB", 
      "jobDefinition": { 
        "jobDefinitionInPlanKey": { 
          "name": "TESTJOB", 
          "workstationInPlanKey": { 
            "name": "S_MDM" 
          } 
        }, 
        "description": "Sleep Job", 
        "taskType": "UNIX", 
        "command": false, 
        "definedByJsdl": false, 
        "taskString": "sleep 200", 
        "taskStringInfo": "sleep 2", 
        "returnCode": 0, 
        "interactive": false, 
        "userLogin": "twsadmin", 
        "recoveryOption": "STOP", 
        "estimatedDuration": 201000, 
        "recoveryRepeatInterval": 0, 
        "recoveryRepeatAffinity": false 
      }, 
      "workstationInPlan": { 
        "name": "S_MDM" 
      }, 
      "jobStreamInPlan": { 
        "name": "TESTJOB", 
        "id": "94938606-bdeb-3504-b5cc-538a6b8eea04", 
        "startTime": "2019-10-25T07:38:49", 
        "workstationKey": { 
          "name": "S_MDM" 
        } 
      }, 
      "dependencies": {}, 
      "priority": 10, 
      "originalPriority": 10, 
      "repeatInterval": -1, 
      "cpuTime": 0, 
      "timeInfo": { 
        "estimatedDuration": 0, 
        "elapsedTime": 0 
      }, 
      "timeRestriction": { 
        "timeDependent": false 
      }, 
      "jobNumber": 0, 
      "headRecordNumber": 0, 
      "status": { 
        "canceled": false 
      }, 
      "rerunInstancesNumber": 0, 
      "lastInRerunChain": false, 
      "dependenciesStats": { 
        "numberOfDependencies": 0, 
        "numberOfJobDependencies": 0, 
        "numberOfJobStreamDependencies": 0, 
        "numberOfInternetworkDependencies": 0, 
        "numberOfPromptDependencies": 0, 
        "numberOfResourceDependencies": 0, 
        "numberOfFileDependencies": 0, 
        "numberOfUnresolvedDependencies": 0, 
        "numberOfNonResourceUnresolvedDependencies": 0 
      }, 
      "criticalNetworkInfo": { 
        "critical": false, 
        "onCriticalPath": false, 
        "late": false, 
        "longRunning": false, 
        "promotedToUrgentQueue": false, 
        "riskForJobStreamLimit": false, 
        "riskForJobStreamPriority": false, 
        "riskForJobStreamFence": false, 
        "riskForJobStreamSuppressed": false, 
        "riskForJobPriority": false, 
        "riskForJobFence": false, 
        "riskForStartTime": false, 
        "riskForWorkStationIgnore": false, 
        "plannedRestriction": { 
          "timeDependent": false, 
          "minDuration": -1, 
          "maxDuration": -1 
        }, 
        "every": false, 
        "restarted": false 
      }, 
      "sigma": 0, 
      "estimatedEndSigma": -1, 
      "recordNumber": 0, 
      "monitored": false, 
      "aliased": false, 
      "buckujob": false, 
      "centralized": false, 
      "centralizedSatisfied": false, 
      "dontTouch": false, 
      "every": false, 
      "external": false, 
      "hasResource": false, 
      "heldByUser": false, 
      "jobLate": false, 
      "needMessage": false, 
      "needResource": false, 
      "pendingCancellation": false, 
      "programmatic": false, 
      "recoveryRerunJob": false, 
      "released": false, 
      "replicated": false, 
      "rerunJob": false, 
      "restarted": false, 
      "running": false, 
      "successPending": false, 
      "untilGone": false, 
      "userJob": false, 
      "userRerunAgain": false, 
      "wildcarded": false, 
      "requiresConfirmation": false, 
      "minDurationNotReached": false, 
      "maxDurationGone": false, 
      "position": 0, 
      "dependencySequenceNumber": -1, 
      "noOperation": false, 
      "recoveryDefinition": false, 
      "everyRerun": false, 
      "rerunStep": false, 
      "recoveryRepeatIterations": 0, 
      "delayRerun": false, 
      "autogenerated": false, 
      "pollingJobUsed": false, 
      "pendingPredecessor": false, 
      "altJobIssue": false, 
      "interactive": false, 
      "definedByJsdl": false, 
      "ignoreFlags": false, 
      "canceledInSuccStatus": false, 
      "msgGenerated": false 
    } 
  ], 
  "priority": 10, 
  "origPriority": 10, 
  "monitored": false, 
  "status": { 
    "canceled": false 
  }, 
  "timeRestriction": { 
    "untilTime": "2019-11-24T07:38:49", 
    "timeDependent": false, 
    "untilAction": "SUPPRESS", 
    "minDuration": -1, 
    "maxDuration": -1 
  }, 
  "timeInfo": { 
    "estimatedDuration": -1, 
    "elapsedTime": 0 
  }, 
  "timeZone": "America/Chicago", 
  "cpuTime": 0, 
  "recordNumber": 0, 
  "dependenciesStats": { 
    "numberOfDependencies": 0, 
    "numberOfJobDependencies": 0, 
    "numberOfJobStreamDependencies": 0, 
    "numberOfInternetworkDependencies": 0, 
    "numberOfPromptDependencies": 0, 
    "numberOfResourceDependencies": 0, 
    "numberOfFileDependencies": 0, 
    "numberOfUnresolvedDependencies": 0, 
    "numberOfNonResourceUnresolvedDependencies": 0 
  }, 
  "sigma": -1, 
  "estimatedEndSigma": -1, 
  "dependencies": {}, 
  "zosSpecificAttributes": { 
    "rerunRequested": false, 
    "addedToCurrentPlan": false, 
    "latestOutPassed": false, 
    "remainingDurationCriticalPath": 0, 
    "remainingOperationsCriticalPath": 0 
  } 
} 
 
 
The job stream can then be submitted by using the job stream ID and plan ID, and passing the JSON response within the SubmitJobstreamInfo field:
The job stream submission via the REST API submit shows the UNISONSCHEDID of the job stream in the response:
The conman output after submission shows the UNISONSCHEDID in the conman “sj @#TESTJOB” output:
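To tie the three steps together, here is a condensed command-line sketch. Every path and filter field below is an assumption modeled on the /twsd service URL pattern; verify the exact URLs and JSON schemas against the /twsd UI before use:

BASE="https://mdm.example.com/twsd"   # service URL root, per the text above
AUTH="-k -u wauser:wapassword"        # credentials are illustrative

# 1. Look up the job stream model (DB) ID, passing name and workstation
curl $AUTH -X POST "$BASE/model/jobstream/header/query?howMany=10" \
  -H "Content-Type: application/json" \
  -d '{"filters":{"jobstreamFilter":{"name":"TESTJOB","workstationName":"S_MDM"}}}'

# 2. Build the jobstream-in-plan ("make jobstream") object from the model ID,
#    the current plan, and an input arrival time
curl $AUTH -X POST "$BASE/plan/current/jobstream/94938606-bdeb-3504-b5cc-538a6b8eea04/make_jobstream" \
  -H "Content-Type: application/json" \
  -d '{"inputArrivalTime":"2019-10-25T07:38:49"}'

# 3. Edit the response to include the input arrival time, save it as
#    make_jobstream_response.json, then submit it
curl $AUTH -X POST "$BASE/plan/current/jobstream/action/submit_jobstream" \
  -H "Content-Type: application/json" \
  -d @make_jobstream_response.json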
REST APIs are most useful when integrating with external products that do not have a ready-made plug-in available for integration.

Author's BIO
Sriram V 

I have been working with Workload Automation for the last 11 years in various capacities: WA Administrator, SME, India SME. I later joined the product team supporting Workload Automation on SaaS, and recently moved to Tech Sales and Lab Services for Workload Automation.

Get yourself a moving buddy - composer rename

If you have ever had to move, you know exactly how painful it can be.
Carefully packing all of your belongings, each of them in the right box, is the first mandatory step.
Keeping track of all the boxes, and all the stuff in each box, is hard but necessary: you don’t want to lose your favorite mug or teddy bear.
Moving tens or hundreds of boxes requires a huge effort, maybe a moving van and an extra pair of hands to help.
You need someone you can trust, who will take care of your most valuable items during the transfer just like you would do yourself. After all, most of those items have a long-standing history behind them that only you know.
And upon arrival at the destination, you will need to place each box in the right room.
And finally, unpacking and finding just the right spot for your beloved belongings in their new home.

No one should have to go through this all alone. Even Woody in the first Toy Story movie warns you about that: “Has everyone picked a moving buddy? […] I don't want any toys left behind. A moving buddy—if you don't have one, get one!”
 
For your Workload Automation migration toward folders, everything has been taken care of! 
The composer rename command will be your perfect moving buddy. Let’s see how it can help you move all your workload definitions to their final destination in a few steps.
 
If you have hundreds of jobs, job streams, workstations, resources and so on, that you have meticulously arranged according to a specific naming convention, you will need to create the folders and subfolder structure to match that organization. 
 
This is the first step, and you can perform it from the Dynamic Workload Console or from the composer command line.
Once you have created all the destination folders, you can start the actual moving. 
 
All of your soldiers can be moved at one time directly under their commander SARGE, who, of course, is part of the original TOY_STORY.
 
You just need to run
composer rename jd @#TS1_SAR_BOS@ @#/TOY_STORY/SARGE/BUCKET_O_SOLDIERS@;preview
 
This will let you check how your soldiers will be moved into their bucket without actually moving them.  
Now that you have checked that the wildcards you are using match the right object selection, and that the results are what you expect, you can run the command again without the ;preview option and let your moving buddy do the job, renaming your definitions with a longer and more user-friendly name.
 
Remember: If you want to be sure you don’t leave any definitions behind… “A moving buddy—if you don't have one, get one!”. 
 
Syntax 
Let’s go deeper into the command syntax: 
  • composer rename is the main command 
  • then you need to specify the object type you want to move (see the full list below) 
  • then you need to specify the matching rule to match the existing names with the new ones 
  • optionally, you can add the ;preview parameter to check the result before the actual execution

Let’s see how the matching rule works: 
You can use either the @ or the ? wildcard to match the existing names, and they are considered in positional order.
 
So, in the previous example, the order of the two @ wildcards is respected after the rename:
@#TS1_SAR_BOS@ becomes @#/TOY_STORY/SARGE/BUCKET_O_SOLDIERS@
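For instance, a job definition with a hypothetical trailing suffix would be mapped like this:

CPU1#TS1_SAR_BOS_01 becomes CPU1#/TOY_STORY/SARGE/BUCKET_O_SOLDIERS_01

The first @ captures the workstation name (CPU1, illustrative) and the second @ captures whatever follows TS1_SAR_BOS (here _01); both captured parts are reused in the same positions in the new name.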
 
You can use the composer rename command to move all the following objects, using either the long or the short keyword:
Another example 
For example, suppose you need to move your workstations AAA_BBB_WKS1 and AAA_BBB_WKS2 to the folder /AAA/BBB and rename them at the same time with a more user-friendly name (workstation_1 instead of WKS1, and so on).
 
Here’s what you need to do: 
  • create folder /AAA 
  • create folder /AAA/BBB 
  • run composer rename ws @_@_WKS? /@/@/workstation_?;preview
  • check that the results are /AAA/BBB/workstation_1 and /AAA/BBB/workstation_2 as expected 
  • run composer rename ws @_@_WKS? /@/@/workstation_? to apply the changes 
 
Note that now that you can move part of the naming convention used in the workload definition names into folder names, you save space and free up characters in the workload definition names. 
 
Remember that, if you are using a Db2 database, in the case of massive changes to your workload definitions it is recommended to run dbreorg and dbrunstats. You can find these scripts under the <installation_dir>/TWS/dbtools/db2/script directory.

References 
For a complete reference see the Organizing scheduling objects into folders topic in the User’s Guide and Reference. 

Author's BIO
Eliana Cerasaro, Technical Lead  

Eliana Cerasaro has worked in the Workload Automation area since 2006. In 2016, she moved from IBM to HCL Technologies and is currently part of the distributed development team of Workload Scheduler as a Technical Lead. She specializes in design and development of backend applications and databases. 
Enrica Pesare, User eXperience Designer – Workload Automation 

Enrica is a computer scientist highly focused on Human-Centered Design and User Experience. After completing a PhD at the University of Bari (Italy), she joined the HCL Software Rome Lab as a Software Engineer in the Quality Assurance team of Workload Automation and as the owner of the Workload Automation production environment. Since 2018, she has been part of the UX design team for Workload Automation.

It’s time to think about upgrading from IWS from v9.3 to v9.5!

IBM Workload Scheduler 9.3 will go out of support effective April 2021, so don’t wait any longer to upgrade! Create a promising future for your business by switching to v9.5: a streamlined, efficient, and secure version.

The objective of this blog post is to provide a step-by-step guide on what needs to be considered before upgrading an IWS MDM from v9.3 to v9.5, together with guidance on usage.

The blog also gives a brief description of the key features provided with the latest version.

Scope 
The scope of this blog is limited to users who have: 
 
  • IWS v9.3 installed   
  • Need to upgrade to v9.5 from v9.3  
  • Need to migrate DB to the latest version  
  • Want to use the product's latest features.
 
What to consider before upgrading
 
The preparation stage is by far one of the most important phases of an upgrade. When upgrading to a major release, such as from v9.3 to v9.5, you must take into consideration crucial factors such as feature and infrastructure differences.

The basic architectural approach of an upgrade has not changed: IWS still requires a parallel upgrade for the MDM and still supports a direct upgrade for the agents. Customers can also choose between a bottom-up and a top-down upgrade.

You can fully benefit from IWS v9.5 features such as workload folders only when the whole IWS network is at v9.5 FP2, so the bottom-up upgrade approach is recommended.
 
IWS v9.5 installation is simplified and faster than ever before; leveraging the Centralized Upgrade feature, the installation and setup of IWS v9.5 should be a smooth transition.

“Learning” IWS v9.5 is the key to being fully productive on the release soon after the upgrade.

IWS administrators should be trained on how to operate the new DWC 9.5 flow and on the usage of the IWS v9.5 releases, so that they are prepared on “how to do” things on IWS 9.5.
But don’t worry! There are IWS users who have already done that with success… read their experience here: https://twsuser.org/links/webinars/

“Keys to a Successful IBM Workload Scheduler v9.5 Migration – a User Perspective” by Tim Townsend, Boston College – February 19, 2020

In the next steps, you will learn how to use IWS v9.5 successfully.
 
In the preparation phase of an upgrade, different actions should be taken: 
 
  1. Learn the new features of the IWS v9.5 releases: 
 
Start by reading here: 
 
Release Notes 9.5:
https://www.ibm.com/support/pages/ibm-workload-scheduler-version-95-release-notes 
https://www.ibm.com/support/pages/dynamic-workload-console-version-95-release-notes 
From the release notes you can navigate to the What’s New pages for the 9.5 releases.
 
See IBM Workload Automation V9.5 dedicated playlist: 
IBM Workload Automation V9.5  
https://www.youtube.com/playlist?list=PLZ87gBR2Z807JeITkFzaFOkg-zi2GYPRl 
 
Some highlights of new features released with 9.5 FP2. 
https://www.youtube.com/playlist?list=PLZ87gBR2Z807g-kAf2wus2c2JwqWkCWh2 
 
Read v9.5 blogs: 
http://www.workloadautomation-community.com/blogs 
 
Engage with the IWS community on “Ask-the-Expert” Sessions on new v9.5 features! 
 
Contact us on: 
http://www.workloadautomation-community.com/forum.html 
 
Take IWS 9.5 Courses 
 
Engage in HCL services to have an ad-hoc course on IWS v9.5: 
https://www.hcltechsw.com/wps/portal/products/workload-automation/services 
 
     2.  Review the detailed system requirements for IBM Workload Scheduler and DWC v9.5; here you can find all the OS and additional products supported by IWS v9.5:
IWS: Detailed System Requirements page.
https://www.ibm.com/support/pages/node/742497
DWC: Detailed System Requirements page.
 

This review is important because some OS and product versions supported by IWS v9.3 are no longer supported by the IWS 9.5 releases.
 
    3.  Review the following document to determine the new setup for the IWS machines and their configuration for IWS/DWC 9.5:
http://www.workloadautomation-community.com/resources.html
See the “WA Documentation and Performance Scenarios” section, where the IWS 9.5 Performance and Capacity document is available.
 
     4.   Last but not least, leverage the benefits of the HCL Services team, which is composed of professional IWS experts who can provide you with ad-hoc learning on IWS v9.5. The team can also assist you with the IWS upgrade, implementation, and design.
 
IWS 9.5

IWS v9.5 offers a lot of new features that are useful to administrators, scheduling operators, developers, analysts, configurators, and application programmers, so let’s start talking about some of them:
 
Command-line based installation
The classic GUI installation is no longer necessary: there is now a command-line installation, a very simple procedure that supports installing all components (master domain manager, backup domain manager, dynamic domain manager, backup dynamic domain manager, Dynamic Workload Console, and agents) using dedicated commands.
For more information, see Typical installation scenario.
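As a condensed sketch, the command-line flow looks roughly like this (the parameter lists are abbreviated and the values are illustrative; see the Typical installation scenario documentation for the full syntax):

# Prepare the database, install the MDM, then install the DWC
./configureDb.sh --rdbmstype DB2 --dbhostname mydb.example.com --dbport 50000 --dbname TWS --dbuser db2inst1
./serverinst.sh --acceptlicense yes --rdbmstype DB2 --dbhostname mydb.example.com --dbport 50000 --wauser wauser --wapassword mypassword
./dwcinst.sh --acceptlicense yes --user dwcadmin --password mypassword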
 
The Dynamic Workload Console has evolved 
The Dynamic Workload Console has evolved with a new graphical layout, features, and improved functionality. 
Dynamic Workload Console V9.5 has undergone both an architectural and web redesign.  
The streamlined design of the console accommodates several features that improve the overall user experience to deliver results for your business: 
  • A new live dashboard experience enables smart troubleshooting use cases for proactive incident management. 
  • New integrated web help system. 
  • Customizable options to make your most commonly used or critical operations more accessible with pins and favorites. 
Would you like to know more? HWA Professional Service Team can give you all the support you need. Please feel free to contact them. 
https://www.hcltechsw.com/wps/portal/products/workload-automation/services 
 
New Reporting System 
 
With Dynamic Workload Console V9.5, the look and feel of the reporting system has undergone a revamp, and you can now import report templates created using the Business Intelligence and Reporting Tools (BIRT).
For more information about the new reporting system, see Reporting. 
 
Event rules have never been so easy
A brand-new experience optimizes the creation and editing of event rules: a modernized user experience that makes the definition of event rules easier, more intuitive, and well-organized, thanks to a folder-based structure.
Automation Hub: the future of automation
Automation Hub is the new automation command center that empowers your Workload Automation with new integrations.
It is a showcase where you can browse through the existing integrations, but also discover new ones to bring your experience to the next level.
Some of the old plug-ins previously provided with the product are now out-of-the-box integrations available on Automation Hub.
The integrations are available for distributed and z/OS environments.
For further information, see Changed features and feature capabilities. 
 
Continuous operation with automatic failover 
Automatic switchover to a backup engine and event processing server when the active master domain manager becomes unavailable.  
Ensure continuous operation with the automatic failover and high availability features of IBM® Workload Scheduler. Configure one or more backup engines so that, when a backup detects that the active master has become unavailable, it triggers a long-term switchmgr operation to itself. The backup engines monitor the behavior of the master domain manager to detect anomalies. If one or more of the following conditions persist for more than 5 minutes, an automatic long-term switchmgr operation is triggered:
  • WebSphere Application Server Liberty Base is down.  
  • The fault-tolerant agent of the master domain manager shows issues with the status of processes such as batchman, mailman, and jobman.
  • The engine can no longer contact the database, for example, due to a network outage.  
 
Organize scheduling objects in workflow folders 
Gain greater business agility by organizing your scheduling objects in a hierarchy of folders. Organize them by lines of business, departments, geographies, or any custom category that makes sense for your business operations. 
Folders also simplify security access. Associate access control lists with individual folders to restrict which folders any single user or group can access. You can also delegate the security management of a specific folder and its sub-folders to other users.
For more information see Organizing scheduling objects into folders. 
 
Finally, but certainly not least, I wish to recall the importance of the Workload Automation Professional Services, now available on the following website: https://www.hcltechsw.com/wps/portal/products/workload-automation/services
For any help or suggestion, the HWA Professional Services team is available to give you the highest level of support and to assist you during the implementation phase of these features.
 
As you have seen, IWS v9.5.0X comes with a lot of interesting features, so let’s not waste any time and talk about how to upgrade to IWS v9.5.0X so that we can start using them.

The following procedure installs a new master domain manager with its own database and links it to the current network. It then imports the objects of the current MDM into the new database and switches the linked MDM to become the new master domain manager.
 
The procedure can be summed up with the following main steps: 
 
IWS v9.5.0X:
  1. DWC and MDM installation with the latest fix pack available for v9.5: the installation can be done by downloading the compressed file containing both the General Availability version 9.5 image and the latest fix pack image. The HWA Professional Services team gives the highest professional support in achieving this task.
  2. DWC – MDM post-installation configuration: set up the security and logon configuration information.
  3. Network tests: MDM-DWC connection and job stream/job submission tests.
  4. Current plan scratch: erase the current plan.

IWS v9.3:
  5. Link the new MDM to the current network: a new BMDM workstation pointing to the new MDM will be created.
  6. Export the v9.3 object data and import it into the MDM v9.5.0X: the objects will be exported from the old DB and imported into the new one.
  7. Configure the current IWS network to link to the new MDM: reconfigure the entire network to point at the new MDM.
  8. Switch the network to the new MDM: using the switchmgr conman command.

IWS v9.5.0X:
  9. Update the current plan with the new network configuration: the current plan will be extended with the new network configuration.
  10. Install a new BMDM if required: a new BMDM will be added to the current network.
 
For more detailed information, refer to the Upgrade_to_V9.5 document at:
http://www.workloadautomation-community.com/resources.html
in the “WA Documentation and Performance Scenarios” section.
 
If you have any questions regarding the upgrade or require more information about the new features, don’t hesitate to ask our HWA Professional Services team.
 
Recommendations 
It is highly recommended to start using the new folder feature only when the whole IWS network is at the v9.5 FP2 level, to leverage the full benefits of the feature.
 
For more information about the behavior of the workload folder feature in a mixed IWS environment, please read the “Compatibility scenarios and limitations” chapter in the release notes for IWS v9.5.
https://www.ibm.com/support/pages/ibm-workload-scheduler-version-95-release-notes 
https://www.ibm.com/support/pages/dynamic-workload-console-version-95-release-notes 
 

Conclusion 
If you need further clarification, the HWA Professional Service Team can answer all your questions, so don't hesitate to get in touch! Click here - https://www.hcltechsw.com/wps/portal/products/workload-automation/services 

Author's BIO
Donatella Sabellico, Senior Technical Specialist, HCL
Donatella Sabellico works as a Senior Technical Specialist in the HCL IWS L3 Support team, in the HCL Software business unit. Donatella has been working in the Workload Automation area since 2000, when she joined the IBM software laboratory; she has covered several roles, starting as a developer, then Senior Lead Level 3 support specialist and customer solution provider.
Claudio Zilliotto, Workload Automation Services, HCL
Claudio has been working on Workload Automation for the last 19 years as an IWS Administrator, performing installation and configuration of IWS products on UNIX, Windows, and OS/400, and implementing cross-dependency solutions between IWS and IWSz environments. He has also supported rehosting activities and provided educational training on the IWS scheduling products to customers. He is currently part of the HCL Workload Automation services team, but he is often involved in test activities for new HWA features.

WAz: An easy way to get operation's successors in the database before loading it in the plan

A common need coming from customers is the possibility to list an operation’s successors in the database, as happens in the current plan (after the operation is loaded into the plan). At the moment, through the WAz ISPF dialog, it is possible to access only the predecessor list for operations in the database.

The WAPL (Workload Automation Programming Language) interface allows you to easily get around this limitation in WAz, giving you the possibility to see the successors of a job in the database, including external ones.
Through a WAPL JCL such as the following, it is possible to list the characteristics of the RAFJOB job belonging to the APPLWAPL application:
In particular, it is possible to list both internal and external successors.
The result can be in Batch Loader-like format as follows, if you choose the STYLE(LOADER) option:
Or you can choose a different option, STYLE(TEXT), to get the following format:
Finally, you can also enquire about RAFJOB across ALL the applications where it is contained:

Author's BIO
Raffaella Viola, Advisory Software Engineer

Raffaella Viola is currently part of the Workload Automation for z/OS development team. In this role she is responsible for analyzing customer requirements and designing and implementing the related solutions. She started her experience with IBM Workload Scheduler for z/OS in 2006, covering different roles both in development and in the customer support L3 team, where she dealt with real field problems and had many opportunities to analyze customer needs. Raffaella had been working at IBM since 1992 on other zSeries products such as IBM NetView Distributed Manager and IBM Tivoli Decision Support. She graduated with honors in Electronic Engineering in Italy and holds a piano conservatory degree. She likes cooking and listening to music.
Ilaria Rispoli, HCL Client Advocacy Manager
 
Ilaria Rispoli works in the Workload Automation area and leads the Advocacy Program for both the on-premises and cloud solutions of the product in Europe, the Middle East, Africa, and worldwide. She started her experience with IBM Workload Scheduler in 2000, covering different roles in the development, customer support, and verification teams. Since 2016, thanks to her customer interaction experience, she has been appointed to lead the HCL Advocacy Program, striving to deliver a high-touch, highly interactive approach to customer relationships and to provide the greatest value and service to customers through strong connections.

Enterprise Resource planning orchestration with Workload Automation and SAP HANA XS Engine

Before learning about our plug-in use cases and how they benefit Workload Automation users, let us get a little insight into what SAP HANA Extended Application Services (XS) is.

The SAP HANA XS Engine is a key aspect of the SAP HANA platform. It provides a comprehensive platform for the development and execution of micro-service-oriented applications, taking advantage of SAP HANA's in-memory architecture and parallel execution capabilities.
The SAP HANA XS Engine is simple to use and ensures optimum performance, since it is tightly integrated with the SAP HANA database.
In order to give a better service to Workload Automation users and to get more benefit from SAP HANA XS, we have added this plug-in to Automation Hub, the catalogue of Workload Automation integrations to automate more and better.

The SAP HANA XS Engine integration enables you to activate or deactivate the scheduled jobs configured on an SAP HANA XS Engine server.
Streamline the configuration and management processes by using the SAP HANA XS Engine integration: you can activate or deactivate the scheduled jobs defined on the SAP HANA XS Engine server and monitor them from a single point of control.
 
Let us begin, 

The prerequisite for this plug-in is that you have an XS Engine up and running.
Log in to the Dynamic Workload Console and open the Workload Designer. Choose to create a new job and select the “SapHanaXsengine” job type in the Business Analytics section.
  • Connection Tab
Establishing the connection to the SAP HANA XS Engine server:
We need to fill out the information for the following categories.

Connection Info > Hostname, Port, Protocol, XSUAA Port
XS Service > Client ID, Client Secret
Credentials > User, Password
Certificate Group > Verify Hostname, Keystore File Path, Keystore Password
Retry Options > optional
Click to verify that the connection to the SAP Hana XS Engine server works correctly. A confirmation message is displayed when the connection is established.  
  • Action Tab 
 
In the Action tab, specify the name of the job that is scheduled on the SAP HANA XS Engine server. Click the Search button to choose the job name.
Select an appropriate action from the drop-down menu.
The available options are:
  • Activate all schedules: select this option to activate all the schedules.
  • Deactivate all schedules: select this option to deactivate all the schedules.
Submitting your job: 
​  
It is time to submit your job into the current plan. You can add your job to the job stream that automates your business process flow. Select the action menu in the top-left corner of the job definition panel and click Submit Job into Current Plan. A confirmation message is displayed, and you can switch to the Monitoring view to see what is going on.
  • Monitor Page 
  • Job Log 
Are you curious to try out the SAP HANA XS plug-in? Download the integration from Automation Hub and get started, or drop a line at santhoshkumar.kumar@hcl.com.

Author's BIO
Dharani Ramalingam -Senior Java Developer at HCL Technologies 
 
Works as a plug-in developer in Workload Automation. A technology enthusiast who loves to learn new tools and technologies, with skills in Java, Spring, Spring Boot, Microservices, ReactJS, NodeJS, JavaScript, and Hibernate.
Arka Mukherjee, Quality Analyst at HCL Technologies

Works as a Quality Analyst for the Workload Automation team at HCL Software, Bangalore. Has worked on both manual and automation test scenarios across various domains.

Workload Automation 9.5:  How to survive disasters

If you want to avoid a potential business disruption in your Workload Automation environment, you should leverage the Master/Backup Master configuration. But what happens if the RDBMS connected to Workload Automation crashes?

In this article, we will describe how to manage both Workload Automation components and DB2 HADR to allow business continuity during a disaster event. 
Scenario 

To avoid possible disasters in a Workload Automation production environment, you must configure your environment in high availability. 

In Figure 1., you see a Workload Automation environment with both Master and Backup Master configured with DB2 HADR. 
In the following sections, we will describe:
  • How to set up the DB2 HADR for Workload Automation 
  • How to configure WebSphere Liberty to manage DB2 HADR 
  • How to recover from disaster 
  • How to troubleshoot DB2 HADR issues

How to set up DB2 HADR for Workload Automation 
This configuration is composed of two nodes (MyMaster and MyBackup) on which all the Workload Automation components are installed (the MDM on MyMaster and the BKM on MyBackup), with their own DB2 nodes configured in HADR.
The DB2 HADR setup is composed of two nodes: a primary node that is active, and a secondary node in standby mode that synchronizes data with the primary node.

DB2 HADR configuration
To configure the Workload Automation database in HADR, we have to set up DB2 as follows on both nodes.
In the following commands, TWS is the database name. 
 
Setup database properties: 

1. The first configuration concerns the DB alternate server name and port, on both nodes:
db2 update alternate server for database TWS using hostname <other machine> port <db_port>

2. Now we have to set all the DB HADR properties on both nodes:
db2 update db cfg for TWS using HADR_LOCAL_HOST <mymaster|mybackup>
This parameter specifies the hostname of the local database  
 
db2 update db cfg for TWS using HADR_REMOTE_HOST <mymaster|mybackup> 
This parameter specifies the hostname of the remote database 
 
db2 update db cfg for TWS using HADR_LOCAL_SVC <local service name> 
This parameter specifies the local DB2 service name 
 
db2 update db cfg for TWS using HADR_REMOTE_SVC <remote service name> 
This parameter specifies the remote DB2 service name 
 
db2 update db cfg for TWS using HADR_REMOTE_INST <remote instance name> 
This parameter specifies the remote instance name 
 
db2 update db cfg for TWS using HADR_TIMEOUT <peer timeout>
This parameter specifies after how much time DB2 considers a node offline.
 
db2 update db cfg for TWS using HADR_TARGET_LIST <peer nodes list> 
This parameter specifies the list of HADR nodes to lookup 
 
db2 update db cfg for TWS using HADR_SYNCMODE <sync mode>
This parameter specifies the transaction log synchronization mode. It should be set depending on various factors, such as the network speed between the nodes. Refer to the IBM documentation for a detailed explanation: https://www.ibm.com/support/knowledgecenter/SSEPGG_9.5.0/com.ibm.db2.luw.admin.config.doc/doc/r0011445.html
 
db2 update db cfg for TWS using HADR_REPLAY_DELAY <delay limit> 
This parameter specifies the number of seconds that must pass from the time that a transaction is committed on the primary database to the time that the transaction is committed on the standby database. 
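Putting it all together, here are example values for MyMaster (the service names, port, timeout, and sync mode are illustrative; mirror the configuration on MyBackup with the local/remote values swapped):

# Example HADR configuration on MyMaster; values are illustrative
db2 update alternate server for database TWS using hostname mybackup port 50003
db2 update db cfg for TWS using HADR_LOCAL_HOST mymaster
db2 update db cfg for TWS using HADR_REMOTE_HOST mybackup
db2 update db cfg for TWS using HADR_LOCAL_SVC DB2_HADR_TWS_1
db2 update db cfg for TWS using HADR_REMOTE_SVC DB2_HADR_TWS_2
db2 update db cfg for TWS using HADR_REMOTE_INST db2inst1
db2 update db cfg for TWS using HADR_TIMEOUT 120
db2 update db cfg for TWS using HADR_TARGET_LIST mybackup:DB2_HADR_TWS_2
db2 update db cfg for TWS using HADR_SYNCMODE NEARSYNC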
 
Start HADR on both nodes 
Now that HADR is configured, we have to start it using a fixed order: first the standby node and then the primary one: 
On MyBackup issue the following command: 
db2 start hadr on db TWS as standby 
 On MyMaster issue the following command: 
db2 start hadr on db TWS as primary 
 
How to configure WebSphere Liberty to manage DB2 HADR
After configuring DB2 in HADR, we have to configure the TWS datasource of Liberty to point to the HADR pair instead of a single DB node.
In this way Liberty, even if it does not know where the database is physically active, is able to reach the TWS database.
To configure the TWS datasource properties, edit the file <TWA_HOME>/<DATADIR>/usr/servers/engineServer/configDropins/overrides/datasource.xml:
  • Add the highlighted parameters to the properties section: 
<properties.db2.jcc 
serverName="MyMaster" 
portNumber="50003" 
databaseName="TWS" 
user="db2inst1" 
password="="{xor}xxxxxxxxxxxxxxxxxx 
clientRerouteAlternateServerName="MyBackup" 
clientRerouteAlternatePortNumber="50003" 
retryIntervalForClientReroute="3000" 
maxRetriesForClientReroute="100" 
/> 
  • The new configuration will be automatically reloaded.  
An example of the entire datasource.xml file using variables is: 
<server description="datasourceDefDB2"> 
<variable name="db.driverType" value="4"/> 
<variable name="db.serverName" value="MyMaster"/> 
<variable name="db.portNumber" value="50003"/> 
<variable name="db.databaseName" value="TWS"/> 
<variable name="db.user" value="db2inst1"/> 
<variable name="db.password" value="{xor}xxxxxxxxxxxxxxxxxx"/> 
<variable name="db.driver.path" value="/opt/wa/TWS/jdbcdrivers/db2"/> 
<variable name="db.sslConnection" value="true"/> 
<variable name="db.clientRerouteAlternateServerName" value="MyBackup"/> 
<variable name="db.clientRerouteAlternatePortNumber" value="50003"/> 
<variable name="db.retryIntervalForClientReroute" value="3000"/> 
<variable name="db.maxRetriesForClientReroute" value="100"/> 
 
<jndiEntry value="DB2" jndiName="db.type" /> 
 
<jndiEntry value="jdbc:db2://${db.serverName}:${db.portNumber}/${db.databaseName}" jndiName="db.url"/> 
<jndiEntry value="${db.user}" jndiName="db.user"/> 
 
<!--  DB2 DRIVER jars Path -> db2jcc4.jar db2jcc_license_cisu.jar --> 
<library id="DBDriverLibs"> 
<fileset dir="${db.driver.path}" includes="*" scanInterval="5s"/> 
</library> 
 
<dataSource id="db2" jndiName="jdbc/twsdbstatementCacheSize="400" isolationLevel="TRANSACTION_READ_COMMITTED"> 
<jdbcDriver libraryRef="DBDriverLibs"/> 
<connectionManager connectionTimeout="180s" maxPoolSize="300" minPoolSize="0" reapTime="180s" purgePolicy="EntirePool"/> 
<properties.db2.jcc 
driverType="${db.driverType}" 
serverName="${db.serverName}" 
portNumber="${db.portNumber}" 
sslConnection="${db.sslConnection}" 
databaseName="${db.databaseName}" 
user="${db.user}" 
password="${db.password}" 
clientRerouteAlternateServerName="${db.clientRerouteAlternateServerName}"clientRerouteAlternatePortNumber="${db.clientRerouteAlternatePortNumber}" 
retryIntervalForClientReroute="${db.retryIntervalForClientReroute}" 
maxRetriesForClientReroute="${db.maxRetriesForClientReroute}" 
/> 
</dataSource> 
</server> 
 
How to recover from disaster 
To recover from a disaster scenario, for example if the primary node crashes, we can leverage the multi-node environment to allow business continuity.

Follow these steps to recover the Workload Automation environment. 

Take over the database on the standby node

We have to “take over” the database on the secondary node.
On MyBackup issue the following command: 
db2 takeover hadr on db TWS 
 
Switch Workload Automation components 
After the database has switched to the secondary node, we also have to switch all the Workload Automation components.
  • Export the master workstation definition into the file ‘file1’:
composer create file1 from ws=S_MDM
where S_MDM is the master workstation.
  • Export the backup master workstation definition into the file ‘file2’:
composer create file2 from ws=S_BKM
where S_BKM is the backup master workstation.
  • From both files, create two new files (‘file3’ and ‘file4’). For example, using the sed Linux command:
sed 's/MANAGER/fta/Ig' < file1 > file3 
sed 's/fta/MANAGER/Ig' < file2 > file4 
  • Switch the event processor from master to backup master: 
conman "switchevtproc S_BKM" 
  • Switch the manager from master to backup master: 
conman "switchmgr masterdm;S_BKM" 
  • Switch the Broker application from master to backup master. 
On master node: 
<TWA_HOME>/wastools/stopBrokerApplication.sh 
On backup master node: 
<TWA_HOME>/wastools/startBrokerApplication.sh 
  • Import the new workstation definitions to make the switch between MASTER and FTA (backup master) permanent:
composer replace file3 
composer replace file4 
 
Now both the middleware and the Workload Automation components are on the MyBackup machine, and we can continue to work on this secondary node without any disruption.
Troubleshooting 
 
How to check HADR health 
To check the HADR status, issue the following command on both nodes:
db2pd -hadr -db TWS

where TWS is the database name.
 

Here is an example of the output of the command on the primary node:

This picture shows the status of HADR on the primary node; the highlighted parameters are the ones that describe the HADR health:
  • HADR_ROLE: on the primary node it must be PRIMARY (STANDBY on the secondary node)
  • HADR_STATE: must be PEER
  • HADR_CONNECT_STATUS: must be CONNECTED
  • The LOG_TIME parameters describe the latest transaction log on all nodes: the date and time must be synchronized and up to date
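A quick way to extract just these fields from the command output (a small shell sketch; the field names are the ones reported by db2pd, as listed above):

db2pd -hadr -db TWS | grep -E "HADR_ROLE|HADR_STATE|HADR_CONNECT_STATUS|LOG_TIME"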
Below is an example of the output of the command on the secondary node:
How to fix HADR issues 
If one of the parameters described in the previous section is not in the expected state, it means that HADR is not working correctly and immediate action should be taken.

Let’s try to understand common errors and the recovery actions that should be performed.

First, try to start HADR on the node that is not working correctly:

db2 start hadr on db TWS as standby|primary

If after some minutes the wrong status does not change, it means that HADR is broken.
The node on which HADR is not working probably has either database corruption or a missing/corrupted transaction log, so the recovery strategy is:

1. Take over HADR on the working node:
db2 takeover hadr on db TWS

2. Back up the database online on the working node:
mkdir /tmp/TWS_backup
db2 "backup db TWS ONLINE to /tmp/TWS_backup INCLUDE LOGS"

3. Copy the TWS_backup directory to the corrupted node.

4. Drop and restore the database on the corrupted node:
db2 drop db TWS
db2 "restore db TWS from /tmp/TWS_backup"

5. Reconfigure and start HADR on the working node (for example MyBackup):
db2 "update alternate server for db TWS using hostname MyMaster port 50003"
db2 "update db cfg for TWS using HADR_LOCAL_HOST MyBackup"
db2 "update db cfg for TWS using HADR_REMOTE_HOST MyMaster"
db2 "update db cfg for TWS using HADR_TARGET_LIST MyMaster:DB2_HADR_TWS"
db2 "start hadr on db TWS as standby"

6. Take over HADR on the corrupted node (in our example, MyMaster):
db2 takeover hadr on db TWS
 
Conclusion 
This article has provided a simple way to add high availability to your Workload Automation environment, to avoid possible disasters on both the middleware and the database side.
Do not hesitate to contact us for any questions.  
 
References 
If your Workload Automation version is 9.4 or earlier, you can refer to this post:
http://www.workloadautomation-community.com/blogs/workload-automation-how-to-survive-disasters# 

Author's BIO
Eliana Cerasaro, Technical Lead, HCL Technologies 

Eliana Cerasaro has worked in the Workload Automation area since 2006. In 2016, she moved from IBM to HCL Technologies and is currently part of the distributed development team of Workload Scheduler as a Technical Lead. She specializes in design and development of backend applications and databases. 

How to control the system where a z/OS job is going to be run

With the SPE of May, Z Workload Scheduler (ZWS) provides the possibility to automatically insert customized SYSAFF and CLASS JOB-card keywords into the submitted JCL.

All z/OS users know very well that the SYSAFF card is used to force the execution of a job on the indicated systems, and that CLASS assigns specific characteristics to the job, like holding, time, and even return code handling logic (JOBRC).

The point is: why is this important for ZWS users?
In this blog we will concentrate on the SYSAFF replacement.

So, the first answer is:

To handle the LPAR planned shutdown easily.

We know that in a JES environment the submission of a job on a specific system does not guarantee its execution on the same system.

We also know that if we must shut down an LPAR for maintenance, we do not want to interrupt the execution of jobs or delay the shutdown too much while waiting for job completion.
Most of all, we know that manual checks and actions are subject to human error, and automatic handling of the scenario is the best way to reduce costs and needed resources.
In ZWS we already had the SHUTDOWNPOLICY parameter to handle this scenario:
if SHUTDOWNPOLICY is set, before submitting the jobs the Engine also considers their estimated duration and, whenever the estimated end is beyond the workstation availability, the job is not submitted.
But what happens if the job is submitted on an LPAR that will not be shut down, so that the SHUTDOWNPOLICY checks pass, but then JES decides to execute the job on a different LPAR (due to scheduling environment availability, for example) and that LPAR will be shut down shortly?
We need the SHUTDOWNPOLICY checks to guarantee the job execution phase too.
The SPE of May provides a very easy way to solve this scenario.

Three Simple Steps
What makes this new feature easy and usable is:
  • The dynamic update of the new option values via the modify command
  • The possibility to display the new option current values via the modify command
  • The immediate application of the new values to all jobs submitted after the update
  • The possibility to define the new option values before the related workstations/destinations are active in the plan: they are ignored until the plan includes them.

JUST AN EXAMPLE:

Let’s see how it works with an example.

Suppose we have the following JES2 SYSPLEX with four LPARs:
  • LPAR TVT5012, system name = S012, tracker = TCZ1, destination = TCZ1A
  • LPAR TVT5013, system name = S013, tracker = TCZ2, destination = TCZ2A
  • LPAR TVT5014, system name = S014, tracker = TCZ3, destination = TCZ3A
  • LPAR TVT5088, system name = S088, tracker = TCZ8, destination = TCZ8A
On every LPAR a tracker is running.

For each tracker we have a destination defined in the Controller initial parameter ROUTOPTS.
In this example we are using a TCP/IP connection, and TCZ1A, TCZ2A, TCZ3A, and TCZ8A are the destination names to be used in the associated workstation definitions.
 
We have a Virtual workstation, named VIRT, that includes all the four LPARs: 
STEP 1  
We have planned a shutdown of the LPAR identified by TCZ2A on the 10th and the 11th of December: the system will not be available from 15.00 to 23.59. We define the availability intervals of the TCZ2A destination of virtual workstation VIRT accordingly:
STEP 2  
We define the SHUTDOWNPOLICY parameter in the initial parameters. The value 100 means that, in the calculation of the job estimated end within the SHUTDOWNPOLICY checks, the whole duration is considered.
STEP 3  
We define the new keyword WSSYSAFF in the JTOPTS initial parameter as follows.
The format used is:

WSSYSAFF(wsname:systemname.destination, … systemname.destination)

If you do not want to stop the Controller, you can add the values dynamically with the following modify commands addressed to the Controller subsystem (TWSZ):
/F TWSZ,AWSSYSAFF(VIRT:S012.TCZ1A)
/F TWSZ,AWSSYSAFF(VIRT:S013.TCZ2A)
/F TWSZ,AWSSYSAFF(VIRT:S014.TCZ3A)
/F TWSZ,AWSSYSAFF(VIRT:S088.TCZ8A)
Consider that with the modify command you can add (AWSSYSAFF) or remove (RWSSYSAFF) the new option values whenever you want.
To maintain control of the new option values, you can use the modify command (DWSSYSAFF) to display the current values:
 
MODIFY COMMANDS TO CHANGE NEW OPTIONS INCREASE FEATURE USABILITY 
 
That’s all! 
 
Let us now see what happens if we submit the following JCL on the VIRT workstation at 13.30.
The job duration is 4 hours.
This means that the job estimated end is at 17.30, within the unavailability range of the TCZ2A destination.
For SHUTDOWNPOLICY, the job lasts too long to be completed in the open interval.

What is new is that now the JCL is tailored to add the SYSAFF statement to the JOB card, to exclude not only the submission but also the execution of the job on TCZ2A.
 
This is the job JCL saved in the JS VSAM. No SYSAFF is specified:
This is the JOBLOG of the submitted JCL showing the added SYSAFF statement:
What happened is that the SHUTDOWNPOLICY checks used the destinations specified in the WSSYSAFF statement for the VIRT workstation to identify the availability intervals to be used. More in detail:
  • The VIRT WSSYSAFF option specifies four “sysname.destination” couples.
  • For each couple, the destination is used to locate the availability intervals in the VIRT workstation, while the system name is the value added to SYSAFF:
              - For S012.TCZ1A there is no unavailability interval (VIRT-TCZ1A): S012 is added to SYSAFF.
              - For S013.TCZ2A the destination is not available from 15.00 to 23.59 (VIRT-TCZ2A) and the job estimated end is within this interval: S013 is NOT added to SYSAFF.
              - For S014.TCZ3A there is no unavailability interval (VIRT-TCZ3A): S014 is added to SYSAFF.
              - For S088.TCZ8A there is no unavailability interval (VIRT-TCZ8A): S088 is added to SYSAFF.
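Putting it together, the tailored JOB card in the JOBLOG would look something like this (the job name and accounting data are illustrative):

//MYJOB    JOB (ACCT),'LONG RUNNING JOB',CLASS=A,
//         SYSAFF=(S012,S014,S088)
//* SYSAFF added by ZWS at submission time: S013 is excluded because
//* destination TCZ2A is unavailable within the job estimated end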

Note that the tailoring can be seen in the job JOBLOG, but it is not saved in the Controller local repository (JS VSAM), because every tailoring is done ad hoc and related to the current situation at submission time.

KEY POINTS OF JCL TAILORING
This scenario involved a virtual workstation and makes it more flexible and effective: with WSSYSAFF you can distribute the workload on different LPARs, controlling also the execution when needed. Consider that WSSYSAFF can be used to force execution also for non-virtual workstations, which have only one destination: the concept is the same. The planned shutdown is not the only scenario that can be addressed by means of this new feature. You could use it to group jobs by their characteristics and force their execution on the most appropriate LPAR.

For example, you can use it:

To execute the most important jobs on the most performant LPAR

Suppose LPAR S088 is the most performant.
The most important jobs should be defined on workstation HIGH, including all four destinations.
The other jobs should be defined on virtual workstation LOW, including all four destinations.
We want to reserve LPAR S088 for the most important jobs.

The WSSYSAFF statement should be:

WSSYSAFF(HIGH:S088.TCZ8A,
      LOW:S012.TCZ1A,S013.TCZ2A,S014.TCZ3A)


  • All jobs defined on workstation HIGH will have SYSAFF=S088
  • All the other jobs on workstation LOW will have SYSAFF=(S012,S013,S014)
 
In conclusion, we can summarize the flow as follows:

Author's BIO
Rossella Donadeo, HWAz Development Technical Leader, HCL

Graduated with a bachelor's degree in mathematics in 1982, Rossella is the technical leader of Z Workload Scheduler. She worked for a couple of years in a small software house, and then joined IBM in 1984. Since then she has worked in level 3 support, development, and verification. Since 1996 she has been focused on the Workload Scheduler for z/OS product. She successfully led the development of ZWS 950 and the related SPE. She is a mindfulness instructor, fond of trekking, yoga, tai chi, and cooking, and practices Vipassana meditation. She is also a writer and has had a book published. She loves to draw.

How to send an email alert for errors with Workload Automation on z/OS

Workload Automation on z/OS (IBM Z Workload Scheduler or HCL Workload Automation for Z) allows users to centrally control all the automation processes, providing capabilities that facilitate operations for mainframe users.

You can have Workload Automation send an email to a recipient or a list of recipients when an alert condition occurs.
When does WA decide to send an email?

You can configure emails to be sent for the following alert conditions:
 
For operations: 
  • An operation in the current plan is active for an unexpectedly long time. 
  • An operation in the current plan is set to ended-in-error status. 
  • An operation in the current plan becomes late, which means that it reaches its latest start time and does not have the status started, complete, or deleted. 
  • An operation in the current plan is promoted by WLM (z/OS Workload Manager)  
  • The time that an operation is waiting to allocate a given resource exceeds the specified time.  
 
For product subtasks: 
  • A Workload Automation subtask or subsystem ends unexpectedly. 
 
You can then decide whether to send an email to an address group based on a rule (FILTER) that checks whether an expression is satisfied or not.
The email is sent if all the expressions in the following rule are satisfied:

FILTER(expression1, expression2, ..., expressionN)

How can you define the email content and the recipient addresses?

By providing the following rule:
 
FILTER(expression1, expression2, ..., expressionN) 
HEADER(FROM: recipient_address 
                TO: recipient_address1, ..., recipient_addressN 
                CC: recipient_address1, ..., recipient_addressN 
                BCC: recipient_address1, ..., recipient_addressN 
                SUBJECT: subject_text) 
TEXTMEMBER(member_name) 
 

If expression1 AND expression2 AND … AND expressionN are satisfied, then an email with the subject «subject_text» and the text contained in the «member_name» file will be sent to the recipient addresses provided in the TO:, CC:, and BCC: parameters.
 
 
An Example: 
 
I can decide to send the email, with the content contained in the file «text», to a group of operators if a job ended in error, the application name matches xxx* (* is a wildcard), and the job name matches yyy*.
In this case the rule will be: 
 
FILTER(&ALERCOND=ERROROPER, &OADID=xxx*, &OJOBNAME=yyy*) 
HEADER(FROM: wa.scheduler@company.com 
                TO: john.smith@external.com, paul.red@internal.com
                CC: federic.white@company.com 
                BCC: edgar.gree@company.com 
                SUBJECT: job &OJOBNAME terminated with error code: &OERRCODE)
 TEXTMEMBER(text)   
 
Where &ALERCOND, &OADID, &OJOBNAME, and &OERRCODE are variables containing the current values (&ALERCOND is the alert condition, &OADID is the application name, &OJOBNAME is the job name, and &OERRCODE is the error code).
 
Prerequisite to use this functionality
To activate this function, ensure that the Communications Server Simple Mail Transfer Protocol (CSSMTP) application is started on a z/OS system that runs in the same JESPLEX where Workload Automation runs.
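
As a quick operational check (a sketch assuming the started task is named CSSMTP; your installation may use a different procedure name), you can verify from the z/OS console that CSSMTP is active, and start it if needed:

D A,CSSMTP     (displays the CSSMTP address space, if it is active)
S CSSMTP       (starts the CSSMTP started task, if it is not already running)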

Author's BIO
Paolo Falsi, Technical Lead, HCL Technologies

He started working for IBM in 1990 at the IBM Software Laboratory in Rome, beginning his career in the development teams of the IBM OSI products. Over the years he has held various roles in the product development lifecycle, mainly as a developer and designer of solutions in IBM Automation products. After five years working on the IBM BigFix family of products as an expert engineer, he joined HCL in 2017, where he is a Technical Leader in the Workload Automation product suite. He holds a master's degree in Mathematics.

Welcome to the new Era of Dashboards!

Keeping track of all the changes and workflows, and staying aware of unexpected behaviors, can be challenging in complex Workload Automation environments, but don’t worry, we’ve got you covered!

We are proud to announce the new Era of Dashboards!
Basing our work on customer feedback, requests, and input collected through the years, and leveraging the new Dynamic Workload Console 9.5 infrastructure, we designed and built a brand-new dashboard system which is:
  1. easy to configure  
  2. easy to use 
  3. easy to understand! 
 
We increased the customization options and decreased complexity! 

Creating a new dashboard from scratch has never been so easy: you can include multiple types of data, with even more filters than before, and combine Workload Automation data with external data sources, so that you have all the information you need in a single view, at a glance.

Brand-new dashboard infrastructure built with state-of-the-art technology 

The Dynamic Workload Console has evolved with a new graphical layout, new features, and improved functionality. It has undergone both an architectural and a web redesign.

The interface is based on a new architectural foundation of modern front-end technologies, while maintaining the current workload logic and processes. With this refurbishment, the Dashboard Application Services Hub (DASH) is replaced by a lean, high-performance, in-house solution based on a lightweight, highly composable, fast-to-start, dynamic application server runtime environment: WebSphere Application Server Liberty. The user interface infrastructure is based on modern front-end technologies such as ReactJS, Redux, React-Saga, and SASS.

The streamlined design of the console accommodates different features that improve the overall user experience:
- a new live dashboard experience enables smart troubleshooting use cases for proactive incident management;
- fully customizable and sharable dashboards let you make real-time data-driven decisions and keep control of your entire business, using the full set of information and datasources of Workload Automation or data coming from RESTful services.

Simplified customization & redesigned experience

Maintaining and keeping control of a scheduling environment can be challenging. The right solution to monitor different machines and environments is to have everything in one place, and the Dynamic Workload Console with the new dashboard system will serve you best! It has a completely renewed look and feel and usage experience, designed to come closer to your needs.

The Dynamic Workload Console provides two new, completely redesigned default dashboards:

- Workload dashboard: data is retrieved from distributed engines, showing information about job status, workstation status, critical job status, and much more.
- Z Workload dashboard: only z/OS engines are listed, and the data shown in the widgets relates only to z/OS engines.
Engine selection 

No data is retrieved at Workload dashboard startup: retrieval is based on the engines the user selects from the engine list (single selection, from the GA version up to Fix Pack 1).

Starting from Fix Pack 1, when opening a board you can choose an engine and save the selection for your next access to the board.

Multiple selection was introduced in the latest version, so it is now possible to retrieve data from multiple engines. In case of problems, the tooltip on the widget header gives you information about unavailable engines.
The drill-down functionality is enabled automatically for all Plan Query datasources and for the default REST APIs (Job count by status and Critical jobs) connected to the KPI, Bar chart, Gauge, Pie chart, and Bubble chart widgets; it is triggered by clicking on the specific widget.

In case of multiple engine selection, you can drill down and monitor only jobs and job streams (for any other object, single selection is required).
A duplicate action is available to allow customization of the predefined dashboards. By switching to “Edit mode” you can modify widget properties, or resize and rearrange widgets freely within the custom board.

The creation of widgets is also simplified and completely guided: when you select a datasource, the type of widget is suggested. You can filter the data to show using the tracked properties section, and set a threshold (for KPI and Gauge widgets).

Once a widget is created, you can also modify it and select another datasource from those suggested.

Complete redesign of widgets 

A new Bar chart and Pie chart interaction has been added, making it possible to hide statuses on the fly. By clicking on the status legend, you can hide one or more statuses from the charts, which are automatically resized based only on the data to be shown.
Furthermore, several new types of widget are available:
- Line chart, which allows you to track a series of data and keep control of any changes
- Bubble chart, which represents data with a series of bubbles in descending order and size
- Web content, which allows you to wrap any web page in your dashboard
- Text editor, useful for keeping notes at your fingertips.

New datasource system available

The new infrastructure allows users to create their own sources of data, specifying any type of object and any type of filter.

A set of predefined datasources, used by the Workload dashboard and the Z Workload dashboard, is already available and can be duplicated and used as templates for creating new datasources.

There are two types of datasource that can be used: Plan query and Rest API.

A “Monitor Workload”-like experience has been introduced to let you easily create a “Plan query” datasource with the same options available in monitoring. Furthermore, you can keep control of “external” data coming from RESTful services.
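
To give an idea of what such an external source could look like, here is a minimal sketch, written in Python, of a tiny RESTful service that a Rest API datasource could poll; the port, the field names, and the JSON shape are all hypothetical, not part of the product's contract:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a small JSON payload; a widget would typically be
        # mapped to one of these (hypothetical) numeric fields.
        payload = {"jobsInError": 3, "jobsWaiting": 12, "jobsSuccessful": 240}
        body = json.dumps(payload).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on all interfaces, port 8080 (hypothetical choice).
    HTTPServer(("", 8080), MetricsHandler).serve_forever()

Any service that returns JSON over HTTP could play the same role; the widget configuration decides which fields to display and how.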

Starting from Fix Pack 2, you can create Plan query datasources using all the filters even when you select more than one engine, on condition that the engines are homogeneous.
For an overview of the Workload Dashboard, have a look at this video: https://www.youtube.com/watch?v=4l5W5eGPUBs.

Want to learn more or have questions about dashboards? Visit the Workload Automation website and don’t hesitate to contact us by email at gabriele.barboni@hcl.com and elvira.zanin@hcl.com!

Author's BIO
Gabriele Barboni, Technical Specialist, HCL Technologies 

Gabriele Barboni is an enthusiastic full-stack engineer, specialized in front-end development, with a background in computer science technologies. He joined the Workload Automation family in 2013, where he focuses on the Dynamic Workload Console. Gabriele has worked intensely on the dashboard projects, as well as on other cool functionalities available in the Dynamic Workload Console. He is passionate about his job, loves to travel around the world, visit new places, and meet new people. He is also a TV series addict and a sportsperson.
Elvira Zanin, Software Engineer, HCL Technologies

Elvira Zanin is a Software Engineer on the Workload Automation development team located in the HCL Rome Hub. She currently works in the WebUI development team, and was previously involved in the Test automation and Add-ons development teams. Elvira has experience with the Dynamic Workload Console. She completed her degree in Computer Science at the University of Salerno and currently lives in Rome, Italy.