
Tutorial Playbook Exercise Instructions

Campaign targeting with TIBCO Data Science

TIBCO Data Science is a unified data science offering that allows organizations to expand data science deployments across the organization by providing flexible authoring and deployment capabilities. Its Team Studio authoring environment is an enterprise web-based platform for business units, data engineering, and data science teams to collaborate on projects. Data engineers and data scientists build data and machine learning pipelines in visual workflows, which run in Spark, Hadoop, and databases.

In this exercise, we will give you an orientation of Team Studio. You will then work on a marketing project for smartphone promotions to replicate a typical data science workflow. You will build data and machine learning pipelines to find suitable customers to target with an offer on the SmartPhone J7 phone model. Finally, you will schedule a batch job to find customers to target from new customer records.

Table of Contents

Part 1: Orientation in TIBCO Data Science Team Studio
Part 2: Prepare dataset for predictive modeling
Part 3: Create a predictive model
    About ROC Curves
    About Confusion Matrices
Part 4: Score new data on a schedule

Part 1: Orientation in TIBCO Data Science Team Studio

In this part of the hands-on lab, we will orient you in Team Studio.

1. Log in to TIBCO Data Science Team Studio (http://partnerstsdemo.tibco.com) using the credentials that were provided to you.


2. Click on the hamburger menu on the top left and select Workspaces. Then create a non-public Workspace named “[Your name] SmartPhone Campaign Targeting”.

3. Inspect the tabs in your workspace. All workspaces in TIBCO Data Science Team Studio have these tabs.


4. Select the “Overview” tab. Click “Add/Edit tags” in the list of actions. Tag it as “Tutorial” and “Marketing campaign”.

5. Under the “Overview” tab, select “Add or Edit Members”. Add your fellow trainees as members of your workspace.


6. Select the “Milestones” tab. Click “Create New Milestone” and create four project milestones within your workspace with target dates.

○ 1. Data Blending

○ 2. Model Building

○ 3. Model Scoring

○ 4. Applying Model to New Dataset

Part 2: Prepare dataset for predictive modeling

In this part, you are going to blend customer data from different sources and create new variables to prepare a dataset that can be used to train a predictive model. You will be working with order data for the phone on promotion and customer profile data, both stored in a database. In addition, you are given a search log that has the terms customers used to search for products on the company’s website. This search log resides in a Hadoop data source.


It is worth mentioning that all operations on the data are performed in-datasource – either in the database itself, or within Hadoop and Spark. Data is not moved to Team Studio. This design makes it possible for analytics to be run over very large datasets without the end-user having to know Spark or SQL.

1. In your workspace, go to the “Work Files” tab, click the “Create New Workflow” action to create a new workflow with both the “Hadoop” and “Database” (use the “partnerstsdemo” database) data sources and name it “1. Data preparation”.

2. Open the workflow. On the left side bar, there are two tabs – “Operators” and “Data”. Clicking on each one brings up the Operators Explorer and Data Explorer respectively.

a. The Operators Explorer lets you filter and search through a list of available operators. Operators are units of computation, such as algorithms or data transformation tasks. To use an operator in your workflow, drag it over to the workflow canvas.


b. The Data Explorer shows the data sources connected to the opened workflow. You can navigate the folders to select data to use in your workflow. To use a dataset in your workflow, drag it over to the workflow canvas.

3. Click on the “Data” tab to get to the Data Explorer, click on “Data Sources” and select the “partnerstsdemo” database. Navigate to “public” and drag and drop the “Telco_Customers” and “Telco_Orders” datasets.

4. Double click on the newly created nodes, and change their titles to “Customers” and “Orders” respectively.


5. Click on “Results” at the bottom left of the screen to bring up the Results Console. This is where you can see the status of a running workflow when no operator is selected on the canvas. When an operator is selected, the Results Console shows the output data of the operator after it has been run. Your Results Console should be empty now.


6. Right click the “Customers” and “Orders” datasets and select Step Run. Inspect the content of the data. What information does each table contain? What are the products in the “Orders” dataset?


7. Click on the “Operators” tab (on the left sidebar) to get to the Operators Explorer. Find the Aggregation operator by typing the operator name in the filter. Attach it to the “Orders” dataset. Double click on it to configure its properties:

a. Name it “Aggregate quantity bought”.

b. Click “Define Aggregations”.

c. Have the aggregation be grouped by “customer_id”.

d. Set up an Aggregation Formula as follows:

i. Aggregation Type: Sum

ii. Column to aggregate: item_quantity

iii. Data Type: BIGINT

iv. Result Column: quantity_bought

Note that when an operator is first placed on the canvas, its name is in red. This indicates an error, since the operator has not been configured yet. It should no longer be red when the operator is properly configured.

Note also that when a new operator’s property dialog is opened, some buttons may be highlighted in yellow. This tells you that the yellow button(s) should be clicked to configure the operator’s properties. When the configuration is done, the yellow highlight on the button(s) disappears.
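Under the hood, an aggregation like this amounts to a GROUP BY with a SUM, pushed down into the datasource. As a minimal sketch (using sqlite3 and made-up order rows, not the actual tutorial data), "Aggregate quantity bought" computes:

```python
import sqlite3

# In-memory stand-in for the Telco_Orders table (made-up rows).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Telco_Orders (customer_id INTEGER, item_quantity INTEGER)")
con.executemany(
    "INSERT INTO Telco_Orders VALUES (?, ?)",
    [(1, 2), (1, 1), (2, 5)],
)

# Total items bought per customer, grouped by customer_id.
rows = con.execute(
    """
    SELECT customer_id, SUM(item_quantity) AS quantity_bought
    FROM Telco_Orders
    GROUP BY customer_id
    ORDER BY customer_id
    """
).fetchall()
print(rows)  # [(1, 3), (2, 5)]
```

Customer 1 appears in two orders (2 + 1 items), so their quantity_bought is 3.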


8. In the “Operators” tab find the Join operator. Attach it first to the “Customers” dataset and then to the “Aggregate quantity bought” dataset. Double click on it to open its property dialog. Click on the “Define Join Conditions” button. Configure it as follows:

a. Set the aliases of the input tables to “customers” and “aggregate” respectively.

b. Select All columns for the “customers” Input Table and the column “quantity_bought” for the “aggregate” Input Table.

Page 10 of 39

Page 11: Tutorial Playbook Exercise Instructions...4. Select the “Overview” tab. Click “Add/Edit tags” in the list of actions. Tag it as “Tutorial” and “Marketing campaign”

c. Define the following Join Condition:

i. Left Table: customers

ii. Join Type: Left Join

iii. Right Table: aggregate

iv. Column1: customers.customer_id

v. Condition: “=”

vi. Column2: aggregate.customer_id

vii. Click “Create”
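The left join above keeps every customer, whether or not they have a matching row from the aggregation. A sketch of the same semantics (sqlite3, with made-up rows; in the workflow this runs in-database):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (customer_id INTEGER, name TEXT)")
con.execute("CREATE TABLE aggregate (customer_id INTEGER, quantity_bought INTEGER)")
con.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Ann"), (2, "Bob"), (3, "Cho")])
con.executemany("INSERT INTO aggregate VALUES (?, ?)", [(1, 3), (2, 5)])

# Left join: every customer survives; customer 3 never ordered,
# so their quantity_bought comes back as NULL (None in Python).
rows = con.execute(
    """
    SELECT customers.customer_id, customers.name, aggregate.quantity_bought
    FROM customers
    LEFT JOIN aggregate ON customers.customer_id = aggregate.customer_id
    ORDER BY customers.customer_id
    """
).fetchall()
print(rows)  # [(1, 'Ann', 3), (2, 'Bob', 5), (3, 'Cho', None)]
```

Those NULLs for non-buyers are exactly what the Null Value Replacement operator cleans up later in this part.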


The Join Properties dialog should resemble the following when you are done.

9. In the “Operators” tab find the Copy to Hadoop operator, attach it to the “Join” operator and configure it as follows:

a. Copy to “Hadoop”.

b. If file exists: Drop

c. Copy mode: Simple

d. Keep default for all other parameters.

10. In the “Data” tab, click “Data Sources” to view the available data sources. This time, click on “Hadoop”, then click on “public_datasets”. This is the folder where we store the files of this hands-on lab. Find the “Telco_SearchLog.csv” dataset.


11. Drag the dataset onto the canvas and rename it to “SearchLog.csv”. Double click on it to open its property dialog. Click on the “Hadoop File Structure” button to inspect its column names and types. Then click “OK” to validate the file structure. Click “OK” to save and close the property dialog.

12. Right click the “SearchLog.csv” dataset and select Step Run. Inspect its content after the Step Run is complete.

13. Attach a Row Filter operator to “SearchLog”. Name it “SmartPhone J7 searches”. Open its properties, click “Define Filter” and configure it as follows:

a. Column Name: search_term

b. Condition: contains

c. Value: “J7” (with double quotes)
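A Row Filter with the "contains" condition keeps only rows whose column value includes the given substring. A plain-Python sketch of that logic, using made-up search-log rows (not the real Telco_SearchLog.csv contents):

```python
# Made-up rows standing in for the search log.
search_log = [
    {"customerID": 1, "search_term": "SmartPhone J7 price"},
    {"customerID": 2, "search_term": "tablet cases"},
    {"customerID": 3, "search_term": "J7 camera review"},
]

# "contains J7": keep only searches that mention the phone model.
j7_searches = [row for row in search_log if "J7" in row["search_term"]]
print([row["customerID"] for row in j7_searches])  # [1, 3]
```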


14. Attach an Aggregation operator to “SmartPhone J7 searches”. Double click on it to configure its properties:

a. Name it “Aggregate search count”.

b. Click “Define Aggregations”.

c. Have the aggregation be grouped by customerID.


d. Set up an Aggregation Formula as follows:

i. Aggregation Type: Count

ii. Column to aggregate: customerID

iii. Result Column: smartphoneJ7_searchCount


15. Drag and drop a Join operator and attach it first to “Copy to Hadoop” and then to “Aggregate search count”. Call it “Join-2”. Double click on it to open its property dialog, then click on the “Define Join Conditions” button and configure it as follows:

a. Define customer_id = customerID as the join condition.

b. Under Join Type, check “Include all records from [Copy to Hadoop] even if no matching records from other file found.”

c. Select All columns for the “Copy to Hadoop” Input Table and the column “smartphoneJ7_searchCount” for the “Aggregate search count” Input Table.

d. Click “OK” to return to the operator’s property dialog and select the “true” radio button for Store Results.


The Define Join Conditions dialog should resemble the following when you are done.


16. Add a Summary Statistics operator and attach it to the latest Join operator. Double click on it to open its property dialog. Click on the “Select Columns” button and select “All” columns.

17. Right-click on the Summary Statistics operator and select Step Run to execute it. Summary Statistics gives you useful information on the selected variables, such as data type, row count, number of unique values, null values, min/max etc. Inspect the results in the Results Console and verify that there are missing values in the “smartphoneJ7_searchCount” and “quantity_bought” columns.

18. Attach a Null Value Replacement operator to the latest Join operator (not the Summary Statistics operator). In the property dialog, click on the “Replace Null Columns” button. Select the “quantity_bought” and “smartphoneJ7_searchCount” columns and replace their null values with 0’s. Click OK to return to the property dialog and select the “true” radio button for Store Results.
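Replacing those nulls with 0 encodes the real meaning of a missing value here: no orders, or no J7 searches. A minimal sketch of what the operator does, on made-up rows:

```python
# Made-up joined rows; None marks customers with no orders or no J7 searches.
rows = [
    {"customer_id": 1, "quantity_bought": 3, "smartphoneJ7_searchCount": None},
    {"customer_id": 2, "quantity_bought": None, "smartphoneJ7_searchCount": 4},
]

# Null Value Replacement: substitute 0 for missing values in both columns.
for row in rows:
    for col in ("quantity_bought", "smartphoneJ7_searchCount"):
        if row[col] is None:
            row[col] = 0

print(rows)
```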


19. Now, we will define the variable to predict. Attach a Variable operator to “Replace null value”. Name it “Add column to predict”. Open its properties, click “Define Variables” and define the following variable:

a. Variable Name: HasBought

b. Data Type: chararray

c. SparkSQL Expression

CASE

WHEN quantity_bought = 0 THEN "No"

ELSE "Yes"

END


d. Click OK, then click “Select Columns” and check all the columns except “quantity_bought”.

e. Click OK and select the “true” radio button for Store Results.

f. Click “Choose File”, leave the path name to default.

g. In “Results Name”, enter “tutorial_prepared_data”.
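The CASE expression in step 19 turns the purchase quantity into the categorical label the model will learn to predict. The same logic in plain Python:

```python
def has_bought(quantity_bought):
    # Same rule as the SparkSQL CASE: 0 items bought means "No", anything else "Yes".
    return "No" if quantity_bought == 0 else "Yes"

labels = [has_bought(q) for q in (0, 3, 1)]
print(labels)  # ['No', 'Yes', 'Yes']
```

Note that "quantity_bought" itself is then dropped from the output columns (step 19d); keeping it would leak the answer into the training data.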


20. Run the entire workflow by clicking on “Run”. If you have been using “Step Run” before, the operators that have already been run will not be run again. When the workflow finishes running, the dataset prepared by this flow will be saved under the path and results name (“tutorial_prepared_data”) that you specified in the workflow’s final operator.

21. Click on “Save” to save your workflow. It is good practice to always enter comments on what has changed each time you save a workflow. Team Studio keeps a version of every save of a workflow. You may restore an earlier saved version at any time.

At the end of this step, your workflow should resemble the following diagram.


Part 3: Create a predictive model

Having prepared the dataset from historical data in the previous part, you are now ready to use it to train a predictive model that can be used to predict which new customers are likely to buy when offered the SmartPhone J7 promotion. Data scientists typically split the historical dataset into two subsets. They use one subset for the modeling algorithm to learn from the data (in other words, to “train the model”), and the other subset to evaluate how the model performs.

1. In your workspace, go to the “Work Files” tab, click the “Create New Workflow” action to create a new workflow with “Hadoop” as a data source and name it “2. Train targeting model”.

2. With the workflow opened, click on the “Data” tab to get to the Data Explorer and find “/tmp/partnerstsdemo_out/<your_user_name>/DataPreparation_<your_workfile_number>/tutorial_prepared_data/” under the “Hadoop” data source.

3. Drag the whole folder onto the canvas. Right click on it, select Step Run and verify that it produces the table that you generated in the previous part by inspecting the Results Console.


4. Attach a Random Sampling operator to your dataset and configure it as follows:

a. Number of samples: 2

b. Sample by: Percentage

c. Click the “Define Sample Size” button, then enter 70 and 30 for sample sizes.

d. Keep the default settings for the other parameters.

5. Drag and drop two Sample Selector operators and attach them to the Random Sampling operator. Configure them so that one carries the 70% sample and the other the 30% sample. Rename them respectively to “Training Set” and “Testing Set”.
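The 70/30 split shuffles the records and carves off two disjoint subsets. A sketch of the idea with Python's standard library (made-up record ids, fixed seed for reproducibility; Team Studio's own sampling mechanics may differ):

```python
import random

# Made-up record ids standing in for the prepared dataset's rows.
records = list(range(100))

rng = random.Random(42)  # fixed seed so the split is reproducible
shuffled = records[:]
rng.shuffle(shuffled)

# 70% for training, the remaining 30% held out for testing.
cut = len(shuffled) * 7 // 10
training_set, testing_set = shuffled[:cut], shuffled[cut:]
print(len(training_set), len(testing_set))  # 70 30
```

The key property is that the two samples never overlap, so the test-set evaluation measures performance on data the model has not seen.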


6. Attach a Logistic Regression operator to the “Training Set” and configure it:

a. Name: Logistic Regression - without search

b. Dependent Column: Select “HasBought”

c. Maximum Number of Iterations: 100

d. Click on the “Select Columns” button and select all columns except “customer_id”, “city”, “state_code”, “country” and “smartphoneJ7_searchCount” (scroll to the bottom).

e. Keep the default settings for the other parameters.


7. Copy and paste the “Logistic Regression - without search” operator onto the canvas, attach it to the “Training Set” and configure it:

a. Name: Logistic Regression - with search. This way we can see whether knowing that a customer previously searched for the smartphone affects the performance of the model.

b. Click on “Select Columns” and add “smartphoneJ7_searchCount” to the selection. Keep the other selections the same.


8. Drag and drop an ROC operator to evaluate the two Logistic Regression models on the “Testing Set”.

a. Connect an arrow from each of the two Logistic Regression models to the ROC operator.

b. Connect an arrow from the “Testing Set” to the ROC operator.

c. Configure the ROC operator by setting “Value to Predict” to Yes (do not type the quotes).


About ROC Curves

The ROC curve is a tool used by data scientists to evaluate models that predict a Yes/No answer. Each curve plots the true positive rate (also called sensitivity) against the false positive rate of one model. Both axes range from 0 to 1. The ROC curve for a model that uses random guessing is a diagonal line from (0, 0) to (1, 1). When comparing two models, the model whose ROC curve lies above the other's is the better model.

The area under an ROC curve (called AUC, or area under the curve) is a quantity used to compare models. The larger the AUC, the better the model. The AUC for any model can never exceed 1 and the AUC for a model using random guessing is 0.5.
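The AUC can be computed as the trapezoidal area under the curve's (false positive rate, true positive rate) points. A small self-contained sketch with made-up curve points:

```python
def auc(points):
    # Trapezoidal area under an ROC curve given (fpr, tpr) points
    # sorted by false positive rate.
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

# Random guessing: the diagonal from (0, 0) to (1, 1) has AUC 0.5.
print(auc([(0.0, 0.0), (1.0, 1.0)]))  # 0.5

# A curve that bows above the diagonal scores higher.
print(round(auc([(0.0, 0.0), (0.2, 0.8), (1.0, 1.0)]), 6))  # 0.8
```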


9. Drag and drop a Confusion Matrix operator to evaluate the two Logistic Regression models on the Testing Set.

a. Connect an arrow from each of the two Logistic Regression models to the Confusion Matrix operator.

b. Connect an arrow from the Testing Set to the Confusion Matrix operator.

10. Click on “Run” in the upper toolbar to run the workflow. Inspect the results of the ROC and Confusion Matrix operators. How do the models compare?

About Confusion Matrices

The Confusion Matrix is another tool used by data scientists to evaluate models that predict categorical answers. It compares the predicted values against the actual values by tabulating the number of records that are correctly and wrongly predicted.
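Tabulating (actual, predicted) pairs is all a confusion matrix is. A sketch with ten made-up test-set labels:

```python
from collections import Counter

# Made-up actual vs. predicted HasBought labels for ten test-set customers.
actual    = ["Yes", "Yes", "No", "No", "Yes", "No", "No", "Yes", "No", "No"]
predicted = ["Yes", "No",  "No", "No", "Yes", "Yes", "No", "Yes", "No", "No"]

# Each (actual, predicted) pair lands in one cell of the matrix.
matrix = Counter(zip(actual, predicted))
print(matrix[("Yes", "Yes")], matrix[("Yes", "No")])  # true positives, false negatives: 3 1
print(matrix[("No", "Yes")], matrix[("No", "No")])    # false positives, true negatives: 1 5
```

For a campaign-targeting model, false positives (customers predicted to buy who don't) waste offers, while false negatives are missed sales opportunities, so it is worth looking at both off-diagonal cells, not just overall accuracy.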

11. In our exercise, “Logistic Regression - with search” produces a better model. We will export this model to make predictions on new customers. Attach an Export operator to “Logistic Regression - with search” and configure it:

a. Export format: Analytics Model Format

b. File name: SmartPhoneJ7_targeting_tutorial


12. Right click on the Export operator and select Step Run.

13. Save your workflow with a comment on what you just created.


At the end of this step, your workflow should resemble the following diagram.


Part 4: Score new data on a schedule

Having trained and exported a predictive model, you can now use this model to make predictions on new customers to decide whether or not to offer them the SmartPhone J7 promotion. Applying a model to make predictions on new data is referred to as “scoring”.

You will create a workflow to score customers and create a job to run this workflow on a schedule. A dataset with customer records has been prepared for you in this lab. In a real application, the customers to be scored come from fresh data.

1. In your workspace, go to the “Work Files” tab. Check that you have a file named “SmartPhoneJ7_targeting_tutorial.am” generated from the last part.

2. Click the “Create New Workflow” action to create a new workflow with “Hadoop” as a data source and name it “3. Batch scoring”.

3. With the workflow opened, click on the “Data” tab to get to the Data Explorer and find the “public_datasets” folder under the “Hadoop” datasource. There is a file called “Telco_ScoringData”.

4. Drag the file onto the canvas, right click on it, confirm its Hadoop File Structure, then Step Run it.


5. Inspect the content of the data in the Results Console. The column names should look familiar; this dataset, however, does not have the “HasBought” column. You will use the model to predict the value (Yes/No) for this column. Essentially, the model will predict whether a customer will buy the SmartPhone J7 when given an offer.


6. Drag and drop a Load Model operator and configure it to load the “SmartPhoneJ7_targeting_tutorial.am” model in your workspace.

7. Drag and drop a Predictor operator and connect arrows to it from both the “Load Model” operator and the scoring dataset.

8. Click on “Run” in the upper toolbar to run the workflow. Inspect the results in the Results Console. The results of the Predictor operator should have a new column, PRED_LOR. This is the prediction from the model. (You need to scroll to the far right to see it.) You should also find another new column, CONF_LOR. This is the confidence in the predicted value.

9. Save the workflow with a comment and close it.


At the end of this step, your workflow should resemble the following diagram.

You have created a workflow to score new customers. Instead of opening the workflow and running it manually each time new customer records are received, you will automate the scoring to run on a schedule. In Team Studio, you can group a series of related tasks and schedule them to run as Jobs. These tasks may include running workflows, running a Python Notebook, or running a SQL file.

10. Go to the “Jobs” tab of your workspace and click on the “Create a Job” action:

a. Name the new job “Batch Scoring”.

b. Make it run on a schedule, once a month (“Run Every 1 Month”).

c. Disable it one month from now.

d. Notify entire workspace on success and nobody on failure.


11. You have just created a job. You will now add the batch scoring workflow as a task within the job. Be sure that you are within the “Batch Scoring” job. Click “Add Task” -> “Run Workflow” and add “3. Batch scoring” workflow to the job.


12. The workflow will now be run on a schedule as part of the job. You may nonetheless click on “Run Now” at any time to run the job. Give that a try. Pay attention to the notification icon on the top right, next to your username, to know when the job has finished running.

13. Return to the Team Studio Home Page by clicking on the TIBCO logo on the top left.
