
Page 1: A Framework for a Task-Oriented User Interaction with ...homepage.cs.latrobe.edu.au/ccvo/papers/Thesis.pdf · A Framework for a Task-Oriented User Interaction with Smart Environments

A Framework for a Task-Oriented User Interaction with Smart Environments Using Mobile Devices

Submitted by
Chuong Cong Vo
B.Sc., M.Sc. (Computer Science)

A thesis submitted in total fulfilment of the requirements for the degree of Doctor of Philosophy

School of Engineering and Mathematical Sciences
Faculty of Science, Technology and Engineering
La Trobe University
Victoria 3086
Australia

September 2013


Contents

List of Tables
List of Figures
Abstract
Statement of Authorship
Acknowledgements

1 Introduction
  1.1 The Usability Problem of Smart Environments
  1.2 Research Approach
  1.3 Research Methodology
  1.4 Thesis Scope
  1.5 Evaluation Methods
  1.6 Thesis Contributions
  1.7 Thesis Organisation

2 Background and Related Work
  2.1 Smart Environments
    2.1.1 Overview of Smart Environments
    2.1.2 Computing Models of Smart Environments
  2.2 Universal Remote Systems
    2.2.1 Point & Control Metaphor
    2.2.2 Spoken Dialogue Systems
    2.2.3 Summary
  2.3 Task Guidance Systems
    2.3.1 PEAT, ISAAC, Jogger, and ICue
    2.3.2 DiamondHelp
    2.3.3 Roadie
    2.3.4 Summary
  2.4 Task-Driven Systems
    2.4.1 Task-Driven Systems in Software Industry
    2.4.2 Aura
    2.4.3 TCE
    2.4.4 InterPlay
    2.4.5 Olympus
    2.4.6 Huddle
    2.4.7 ANSI/CEA-2018
  2.5 Conclusion

3 Research Problem and Proposal
  3.1 Usability Issues of Smart Environments
    3.1.1 Many Devices in Smart Environments
    3.1.2 Many Functions of a Device
    3.1.3 Multiple-Device Tasks
    3.1.4 Replication, Composition and Customisation of Tasks
  3.2 Overview of Proposed Solution
    3.2.1 Requirements
    3.2.2 Assumptions
  3.3 Research Approach
  3.4 Evaluation Methods
  3.5 Implementation Approach

4 The TASKCOM Framework
  4.1 Overview of the TASKCOM Framework
  4.2 Concepts
    4.2.1 Task Computing Framework
    4.2.2 A Model of Smart Environments
    4.2.3 Tasks
    4.2.4 Task Models
    4.2.5 Task-Based User Interfaces
    4.2.6 Context-Aware Task Suggestion
    4.2.7 Pointing & Tasking Metaphor
    4.2.8 Relations Between Concepts
  4.3 Scenarios
    4.3.1 An End-User’s Scenarios
    4.3.2 A Developer’s Scenarios
  4.4 Architecture
    4.4.1 TASKUI
    4.4.2 TASKOS
    4.4.3 User Profile Manager
    4.4.4 Task Execution Engine
    4.4.5 Task Repository Manager
    4.4.6 Task Suggestion Engine
  4.5 Task Modelling
    4.5.1 Task Composition and Decomposition
    4.5.2 Task Refinement
  4.6 Task Execution
    4.6.1 Model of Task Execution Engine
    4.6.2 User Model
    4.6.3 Model of Task Instances
    4.6.4 Lifecycle of Task Instances
    4.6.5 Resuming Tasks on Different Mobile Devices
    4.6.6 Collaboration Between Distributed Users

5 Implementation
  5.1 Task Modelling and Specification
    5.1.1 Task Ontology
    5.1.2 Task Specification Language
  5.2 Task Guidance
    5.2.1 User Stories
    5.2.2 Rendering Task Guidance on TASKUI’s User Interface
  5.3 Distributed Collaboration
    5.3.1 User Story
    5.3.2 Communication for Sharing a Task
  5.4 Context-Aware Task Suggestion
    5.4.1 Place-Based Tasks
    5.4.2 Hierarchical Model of Smart Environments
    5.4.3 Task Suggestion Algorithm
    5.4.4 Put Users in Control
    5.4.5 An Implementation of Task Suggestion
  5.5 Capturing Users’ Intended Tasks
    5.5.1 Keyword Search for Supported Tasks
  5.6 Task Repository Management
  5.7 Remarks
    5.7.1 Why Centralised Model of TASKCOM?
    5.7.2 Technologies Used

6 Evaluation and Discussion
  6.1 Comparison with Existing Approaches
    6.1.1 Compared Subjects
    6.1.2 Compared Aspects/Features
    6.1.3 Results
  6.2 Use Cases
  6.3 An Experiment
    6.3.1 Measurements
    6.3.2 Participants
    6.3.3 Experimental Tasks
    6.3.4 Post-Test Questionnaire
    6.3.5 Procedure
    6.3.6 Results
  6.4 Evaluation of Task Suggestion Mechanism
    6.4.1 Participants
    6.4.2 Procedures
    6.4.3 Results
  6.5 Applications of TASKCOM in Teaching Activities
  6.6 Applications of TASKCOM in AAL

7 Conclusion and Research Directions

A Questionnaire 1: Participants’ Opinions about TASKUI
B Questionnaire 2: Participants Suggesting Tasks
C Questionnaire 3: Participants Re-ordering Suggested Tasks
D Experiment Data
  D.1 Raw Data
  D.2 Synthetic Data
    D.2.1 Time
    D.2.2 Clicks
    D.2.3 Errors
    D.2.4 Helps
E Questionnaire Data: Participants’ Responses to Questionnaire 1
F Questionnaire Data: Participants’ Responses to Questionnaire 3

Bibliography


List of Tables

5.1 Examples of task refinement.
5.2 Different suggestions of tasks in different contexts.
6.1 Comparison between the existing frameworks and TASKCOM.
6.2 Comparing users’ activities with and without TASKCOM.
6.3 Experimental tasks for the user study.
6.4 The participants’ performance of the tasks.
6.5 Time (in seconds) spent to accomplish the tasks.
6.6 Number of errors made while accomplishing the tasks.
6.7 Number of helps taken while accomplishing the tasks.
6.8 The participants’ responses to the usability questionnaire.
6.9 The scenarios used in the user survey.
6.10 The tasks suggested by the system for each of the scenarios.
6.11 Tasks suggested by participants for Scenario #1.
6.12 Tasks suggested by participants for Scenario #2.
6.13 Tasks suggested by participants for Scenario #3.
6.14 Tasks suggested by participants for Scenario #4.
6.15 Tasks suggested by participants for Scenario #5.
6.16 Participants’ re-ordering of the tasks.
6.17 Matching TASKUI’s and participants’ task suggestions.
D.1 Participant #1’s performance with TASKUI.
D.2 Participant #1’s performance without TASKUI.
D.3 Participant #2’s performance with TASKUI.
D.4 Participant #2’s performance without TASKUI.
D.5 Participant #3’s performance with TASKUI.
D.6 Participant #3’s performance without TASKUI.
D.7 Participant #4’s performance with TASKUI.
D.8 Participant #4’s performance without TASKUI.
D.9 Participant #5’s performance with TASKUI.
D.10 Participant #5’s performance without TASKUI.
D.11 Participant #6’s performance with TASKUI.
D.12 Participant #6’s performance without TASKUI.
D.13 Participant #7’s performance with TASKUI.
D.14 Participant #7’s performance without TASKUI.
D.15 Participant #8’s performance with TASKUI.
D.16 Participant #8’s performance without TASKUI.
D.17 Participant #9’s performance with TASKUI.
D.18 Participant #9’s performance without TASKUI.
D.19 Participant #10’s performance with TASKUI.
D.20 Participant #10’s performance without TASKUI.
D.21 Time (in seconds) spent to complete Task 1.
D.22 Time (in seconds) spent to complete Task 2.
D.23 Time (in seconds) spent to complete Task 3.
D.24 Time (in seconds) spent to complete Task 4.
D.25 Number of clicks taken to complete Task 1.
D.26 Number of clicks taken to complete Task 2.
D.27 Number of clicks taken to complete Task 3.
D.28 Number of clicks taken to complete Task 4.
D.29 Number of errors made to complete Task 1.
D.30 Number of errors made to complete Task 2.
D.31 Number of errors made to complete Task 3.
D.32 Number of errors made to complete Task 4.
D.33 Number of helps taken to complete Task 1.
D.34 Number of helps taken to complete Task 2.
D.35 Number of helps taken to complete Task 3.
D.36 Number of helps taken to complete Task 4.
E.1 Participants’ experience without TASKUI.
E.2 Participants’ experience with TASKUI.
F.1 Scenario #1.
F.2 Scenario #2.
F.3 Scenario #3.
F.4 Scenario #4.
F.5 Scenario #5.


List of Figures

1.1 Application-centric interaction paradigm.
1.2 Task-centric interaction paradigm.
3.1 Examples of function-oriented user interfaces.
3.2 Example screens of application icons on mobile devices.
4.1 Deployment architecture of TASKOS.
4.2 Relations between entities in TASKCOM.
4.3 Conceptual component and deployment architecture.
4.4 A graphical model of a “make coffee” task.
4.5 Main components and data model of the task execution engine.
4.6 The simplified user model in TEE.
4.7 The simplified data model of a task instance in TEE.
4.8 The lifecycle of a task instance.
5.1 Overview of the proposed task ontology.
5.2 Syntax diagram for specifying a task.
5.3 Syntax diagram for specifying a service.
5.4 Syntax diagram for specifying a condition.
5.5 Syntax diagram for specifying Then and Else elements.
5.6 Ontology of User Interface.
5.7 Syntax diagram of a user interface for a task.
5.8 Syntax diagram for specifying a map.
5.9 Syntax diagram for specifying a centre or a POI on a map.
5.10 Syntax diagram of a Select element.
5.11 Syntax diagram for specifying a Listener.
5.12 Syntax diagram of a list.
5.13 Syntax diagram of a Slider.
5.14 Syntax diagram of an input.
5.15 The schema of TASKCOM’s language.
5.16 A specification of an “adjust TV volume” task.
5.17 A specification of a “make coffee” task.
5.18 A specification of an “adjust room brightness” task.
5.19 Task guidance for borrowing a book from a library.
5.20 Task guidance for making cappuccino.
5.21 An example of a message containing a task’s user interface.
5.22 An example of two users collaborating on a task.
5.23 Sequence diagram: request for task collaboration.
5.24 Examples of associations between tasks and spaces.
5.25 Examples of space-based task suggestions.
5.26 Pointing & Tasking metaphor.
5.27 An example of task and space associations.
5.28 Experiment of the pointing metaphor using the Cricket system.
5.29 Settings for automatic task suggestion.
5.30 Using keyword search to find supported tasks.
5.31 Management of the task repository.
5.32 Adding a new task to the task repository.
5.33 Validating a task specification.
6.1 The participants’ performance with and without TASKUI.
6.2 The average time spent for each of the tasks.
6.3 Number of errors made with and without TASKUI.
6.4 Number of helps taken with and without TASKUI.
6.5 The participants’ responses to the usability questionnaire.
6.6 Differences between TASKUI’s and the participants’ task orderings.
6.7 Matching TASKUI’s and the participants’ task suggestions.


Abstract

Research on smart spaces [1] (or smart environments) is an active research topic under the pervasive computing umbrella [2]. Smart environments are physical environments (e.g., homes, offices, cars, seminar rooms, shopping centres) which are richly and invisibly interwoven with tiny computers, sensors, actuators, computer-controllable devices, and pervasive software services. More and more everyday devices (e.g., washing machines, televisions, air-conditioners, heaters, printers, lights, coffee makers, and window drapes) can be controlled remotely.

A traditional paradigm for users to interact with a smart environment (i.e., controlling devices) is to use software applications (called controller applications). This interaction paradigm is called the application-centric paradigm. It has several limitations:

• Users need to install the right controller applications on their computers.

• The user interfaces are not directed towards the tasks users want accomplished (e.g., turn off the light); they are instead organised around the functions of the controlled devices. Such function-based user interfaces require users to understand devices’ functions and to coordinate them appropriately to accomplish their tasks.

• It cannot directly answer users’ questions such as what tasks are possible in a particular smart environment, and how to achieve a task using the devices and services available there.


• It does not allow a user to move an ongoing task across computers.

• It does not allow distributed users to collaborate on the same task.

In this thesis, we discuss the aforementioned limitations further. As a solution to address them, we investigate applying the task-centric interaction paradigm in smart environments. With the task-centric paradigm, users interact with smart environments in terms of tasks, instead of individual applications and functions of devices. We design a framework (called TASKCOM) which supports the development and deployment of the task-centric paradigm in smart environments. The framework consists of a task-driven development methodology, a system architecture, a tool, and a specification language. The specification language is used to specify task models in XML documents which can be validated, interpreted, and executed by the system.
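The specification language itself is presented in Chapter 5. Purely as an illustrative sketch (the element and attribute names below are invented and are not TASKCOM’s actual schema), an XML task model of this general shape can be validated and stepped through in a few lines of Python:

```python
import xml.etree.ElementTree as ET

# Hypothetical task model; the schema here is invented for illustration
# only and does not reproduce TASKCOM's specification language.
TASK_XML = """
<task name="make coffee">
  <step id="1" service="CoffeeMaker.fillWater">Fill the water tank</step>
  <step id="2" service="CoffeeMaker.addBeans">Add coffee beans</step>
  <step id="3" service="CoffeeMaker.brew">Start brewing</step>
</task>
"""

def load_task(xml_text):
    """Minimally validate the document and return (task name, ordered steps)."""
    root = ET.fromstring(xml_text)
    if root.tag != "task" or "name" not in root.attrib:
        raise ValueError("not a task specification")
    steps = [(s.attrib["id"], s.attrib["service"], s.text.strip())
             for s in root.findall("step")]
    return root.attrib["name"], steps

name, steps = load_task(TASK_XML)
for step_id, service, guidance in steps:
    # An execution engine would invoke `service` on the target device and
    # render `guidance` as a step-by-step instruction on the mobile device.
    print(f"Step {step_id}: {guidance} (via {service})")
```

A real task execution engine would, per the thesis, also manage task instances, their lifecycle, and user context; this sketch only shows the validate-interpret-execute loop in miniature.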

We use TASKCOM to develop a prototype system (a task-oriented system, TASKOS) which is our implementation of the task-centric paradigm, and which is used in our experiments to evaluate the usefulness of the task-centric paradigm against the application-centric paradigm in smart environments. In addition to addressing the limitations of the application-centric paradigm, TASKOS also provides users with task guidance and context-aware task suggestions. Task guidance consists of step-by-step instructions for accomplishing a task, generated from the XML specification of the task. The context-aware task suggestion mechanism allows users to discover supported tasks and quickly issue a task they want accomplished.

The experiment- and questionnaire-based evaluation shows that the participants were willing to use our system, and they commented positively on their experience with it. Measurements of the participants’ task accomplishment show that the participants accomplished tasks more effectively and efficiently when using our task-oriented system.


Statement of Authorship

Except where reference is made in the text of the thesis, this thesis contains no material published elsewhere or extracted in whole or in part from a thesis submitted for the award of any other degree or diploma.

No other person’s work has been used without due acknowledgement in the main text of the thesis.

The thesis has not been submitted for the award of any degree or diploma in any other tertiary institution.

Signature:                Date:


Acknowledgements

I would like to thank my supervisors Dr. Torab Torabi and Dr. Seng W. Loke for your time, support, wisdom, and guidance. Your patience has allowed me to pursue my research interests, and your comments have helped me to seek the big picture in my research and to identify implications for my findings.

I am fortunate to have many friends and colleagues supporting me; thank you.

To the faculty and staff at the School of Engineering and Mathematical Sciences, the Department of Computer Science and Computer Engineering, the Research and Graduate Studies Office, the Higher Degrees Research Committee, and the Academic Language and Learning Unit, for your time and help, thank you.

To my parents, for your immeasurable sacrifices and selfless love, for nurturing me in my formative years, and for your continued support, thank you.

To my love Dzung Nguyen and my son Khuong Vo, for encouraging me, nudging me through my work, and supporting me through many moments: thank you for always being here for me.


Chapter 1

Introduction

Pervasive computing [2] aims to seamlessly integrate computers into everyday settings to support people in their everyday tasks. More and more everyday physical objects are computerised: they have networking and computational capabilities, and they can be controlled remotely via software applications (called controller applications) running on mobile computers (e.g., smartphones). For example, several contemporary models of SAMSUNG’s air-conditioners can be controlled remotely via the Smart Air-Conditioner mobile application¹. Several models of LG’s washers and dryers can be controlled via the Smart Laundry & DW mobile applications². Even a single bulb such as the Bluetooth Bulb³ or a milk jug⁴ can be controlled using mobile applications.

Smart environments [1] are physical environments (e.g., homes, offices, cars, seminar rooms, shopping centres) which are networks of tiny computers, sensors, actuators, and computerised devices. For example, a modern seminar room can have a network of computer-controlled devices such as a television, an air-conditioner, a music player, a printer, a projector, lights, and windows. Such a smart seminar room allows a user to accomplish many tasks, such as presenting slideshows, playing movies, and printing documents. Another example of a smart environment is the smartphone that a user carries with him/her. We see a smartphone as the owner’s

¹ http://www.samsungapps.com/topApps/topAppsDetail.as?productId=G00005321268
² https://play.google.com/store/apps/details?id=com.lg.apps.lglaundry
³ http://www.bluebulb.com/
⁴ http://www.teehanlax.com/labs/do-we-have-milk/


personal smart environment. A modern smartphone has powerful networking and processing capabilities. It often has a rich set of input/output components and sensors (e.g., accelerometer, light sensor, proximity sensor, compass, WiFi, Bluetooth, NFC, GPS receiver, camera, microphone, and gyroscope). There are hundreds of thousands of mobile applications, which enable an uncountable number of tasks a user can accomplish on a smartphone. Similarly, a modern printer is not just used for printing; it can also email, photocopy, scan, make a call, fax, and so forth.

Adding more functions to a device allows users to accomplish more tasks. While most people desire devices with more functions, they also want those devices to be easy to use [3]. Paradoxically, the more functions a device provides, the more complicated it is to use. For this reason, there is a challenge in creating smart environments (i.e., networks of devices) which are more capable and, at the same time, more usable [4].

In the remainder of this chapter, we first describe the research problem which we address in this thesis. We then propose our approach to the problem. The subsequent sections describe the research aim, the methodology, the scope, the contributions, and the structure of the thesis.

1.1 The Usability Problem of Smart Environments

There is currently a usability crisis in computer-controlled electronic products [5, 6]. A study [7] reports that half of all reportedly malfunctioning consumer electronics devices returned to stores are in full working order; customers simply could not figure out how to operate them. In another study [8], some participants saw smart technology as excessive, sometimes useless and invasive, while others used it only to perform basic tasks. A study [9] on the usability of smart conference rooms found that users tended to rely on experts (wizards) to operate the room. A study [10] conducted home visits to 14 households with home automation systems installed and showed that the issue of complex user interfaces


was one of the barriers to broader adoption of home automation systems. One of the participants mentioned that “My brother spent hours trying to show me how to use it, and I still don’t know how to use it.” Others noted people’s fear: “I started explaining the panel to them (the fire department) and they looked in dread. People just don’t want to touch it. And my mother sat in our house in the dark, because she was scared to touch any of the controls.”

A traditional paradigm for users to interact with a smart environment (i.e., controlling a network of computer-controllable devices) is to use software applications (called controller applications). This interaction paradigm is called the application-centric paradigm. Figure 1.1 presents how a user interacts with a network of devices via applications. With this paradigm, to control a device, the user must have a dedicated controller application installed on his/her computer (e.g., a smartphone); thus, to control a smart environment (i.e., a network of devices), the user must install a number of applications on his/her computer.

[Figure: the user interacts with Application1, Application2, …, Applicationn, each of which controls the functions of device1, device2, …, devicen.]

Figure 1.1: Application-centric interaction paradigm.

There are limitations of this paradigm:

• It requires users to install and maintain controller applications on their com-

puters.

• Controller applications’ user interfaces are not directed towards the tasks users want accomplished (e.g., watching a movie); instead, they are organised around


functions of controlled devices (e.g., “on”, “off”, “play”, and “record”). With

function-based user interfaces [11], users interact with smart environments

in terms of functions of individual devices. To accomplish a task, a user must

select, parameterise, and then execute functions in a correct order. When

these functions are executed, they cause an effect (e.g., a movie is played).

Generally, users are interested in the effect, not the functions they need to

execute to get the desired effect.

• It does not allow collaboration between functions provided by different devices. For example, a smart television can display photos but does not know how to download (e.g., via Bluetooth or WiFi) the photos stored on a smartphone, whereas a smartphone can capture photos but does not know how to send them to a smart television.

• One of the fundamental principles of interaction design is discoverability, which allows the user to build up a model of what tasks he/she can do with a system. The application-centric paradigm and its function-based user interfaces fail to meet this principle. Indeed, they cannot help users answer questions such as what tasks are possible in a local smart environment and how to achieve a task using the devices and services in that environment.

• Because the application-centric paradigm ties users’ tasks to applications, it essentially does not allow a user to move an ongoing task across computers.

• It does not allow distributed users to collaborate on the same task.

As a smart environment is embedded with more and more computerised devices and services, each of which gains more functions, the range of available functions will explode. This plethora of functions would overwhelm users, hindering them from recognising and performing the right sequence of functions for the right tasks. For example, suppose a user wants to send a photo from his/her smartphone to a nearby friend’s smartphone5. He/she may

5We see a smartphone itself as a personal smart environment.


ask him/herself several questions, such as how can I achieve this task? Can I send the photo via NFC or Bluetooth? If he/she decides to use Bluetooth, he/she would then ask how should I start? Turn on Bluetooth first or select the photo first?, and so on. Such situations often arise in smart environments as well.

The function-oriented user interface of a device tends to become more complicated as the device is loaded with more functions [5, 6]: more buttons, options, modes, and menus. For example, a typical multipurpose photocopier (which itself can be seen as a single-device smart environment in our vision), the Fuji Xerox ApeosPort II 4000, has 13 buttons, two dials, one keyboard, two LED lights, and an LCD screen with three choices and four options, each of which leads to many other screens. Such a user interface provides no easy way to help the user perform even basic operations on the device. In many cases, users have trouble

understanding what tasks are supported or how to associate the desired logical ac-

tions with physical functions [12]. Consequently, to exploit a smart environment,

users must (1) understand the meanings of functions provided by the environ-

ment in order to issue feasible tasks; and (2) map their high-level goals of tasks to

low-level operational vocabularies of the functions. These requirements may be beyond ordinary users as the complexity, the diversity, and the sheer number of devices and services (as well as the different combinations of ways they can work together) continually increase.

In recognition of this problem, the research community has proposed several approaches which aim to ease users’ use of smart environments for accomplishing their tasks. One of the approaches is context-aware systems [13, 14], which infer the user’s context and behave accordingly by using increasingly complex mechanisms (e.g., big rule sets or machine learning). For example, when a mobile application recognises that the user is in a formal meeting, it switches the phone to silent mode. Similarly, some systems [15, 16] try to recognise or predict a user’s current activity or task and then allocate the resources and functions necessary for carrying out that activity or task. Also based on machine learning techniques, some systems allow end-users to “program” a smart


environment by demonstration. For example, Alfred [17] and CAPpella [18] allow the end-user to specify the name of a new task, then demonstrate one or more sequences of actions for achieving that task, and finally tell the system the conditions under which it should prefer one plan to another. Once trained, the systems continuously monitor conditions in order to enact the demonstrated tasks. These systems often rely heavily on the capability of context recognition (e.g., the recognition of location and user activity). Because techniques for context recognition often require a training phase and streaming data, which are very domain-specific, they cannot be deployed widely and generally in everyday settings. Moreover, because these systems need time to learn activity patterns, they can only support common and routine activities and tasks.
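The rule-based flavour of such context-aware behaviour can be sketched as follows. The rules and context keys here are hypothetical illustrations, not the mechanism of any of the systems cited above:

```python
# Minimal sketch of rule-based context-aware triggering (hypothetical rules,
# hypothetical context keys; for illustration only).

def meeting_rule(context):
    """Fires when the user appears to be in a formal meeting."""
    return context.get("location") == "meeting-room" and context.get("calendar_busy")

RULES = [
    # (condition over the sensed context, action to perform)
    (meeting_rule, "set_phone_silent"),
    (lambda ctx: ctx.get("location") == "home" and ctx.get("time") == "evening",
     "dim_lights"),
]

def react(context):
    """Return the actions whose conditions hold for the sensed context."""
    return [action for condition, action in RULES if condition(context)]

print(react({"location": "meeting-room", "calendar_busy": True}))
# → ['set_phone_silent']
```

Real systems replace such hand-written rules with large rule sets or learned models, which is precisely where the training-phase and domain-specificity limitations noted above arise.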

Another approach which also aims to address the complexity of controlling smart environments is to provide a universal remote user interface for controlling multiple devices. Examples are UbiControl [19], D’Homme [20], UIA [21], and PUC [22]. To function, these systems need specifications of the controlled devices and special pieces of hardware attached to the controlling and controlled devices. A common limitation of the existing universal remote controls is that their user interfaces are device-specific and do not support well the accomplishment of tasks that span multiple devices.

1.2 Research Approach

Our research approach is inspired by the concept of task computing or task-driven computing [23–25], which aims to reduce the complexity of using technologies by shifting users’ attention to what they want to do rather than to the specific tools or functions for doing those tasks. We propose to apply the task-centric paradigm to user interaction with smart environments. The research aim is to address the limitations of the application-centric paradigm in smart environments.

With the task-centric paradigm, users interact with smart environments in terms of tasks, instead of individual applications (see Figures 1.1 and 1.2). The fundamental hypothesis of our approach is that the accomplishment of a device-control task in a smart environment is a collaboration of the controlled devices’ functions, executed in an appropriate sequence.

[Figure: the user interacts with Task1, Task2, …, Taskn, each of which draws on the functions of device1, device2, …, devicen.]

Figure 1.2: Task-centric interaction paradigm.

To realise the task-centric interaction paradigm in smart environments, our research plan is to design a framework (we call it TASKCOM) for developing and deploying the task-centric paradigm in smart environments. The framework consists of a task-driven development methodology, a system architecture, a tool, and a specification language. The specification language is used to specify task models in XML documents which can be validated, interpreted, and executed by machines. A task is specified as a composition of sub-tasks, services, manual actions, and instructions. Sub-tasks and services are ultimately bound to devices’ functions. Task specifications can be reused, combined, and customised.
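The concrete specification language is defined later in the thesis; purely as a rough illustration of the idea, a task model in this style might look like the following, where the element and attribute names are hypothetical and not the actual TASKCOM vocabulary:

```xml
<task name="ShowPhotosOnTV">
  <!-- sub-tasks, services, manual actions, and instructions composed in sequence -->
  <service name="DownloadPhotos" device="smartphone"/>
  <subtask ref="PreparePhotoViewer"/>
  <manual-action instruction="Select the photos you want to display"/>
  <service name="DisplayPhotos" device="smart-tv"/>
</task>
```

Each service element would finally be bound to a function of a concrete device, while manual actions become instructions shown to the user.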

We use TASKCOM to develop a prototype system (we call it a task-oriented system, TASKOS) which is an implementation of the task-centric paradigm and which is used in our experiments to evaluate the usefulness of the task-centric paradigm against the application-centric paradigm in smart environments. In addition to addressing the limitations of the application-centric paradigm, TASKOS also provides users with task guidance and context-aware task suggestions. Task guidance consists of step-by-step instructions for accomplishing a task; it is generated from the XML specification of a task model. The context-aware task suggestion mechanism allows users to discover supported tasks and quickly issue a task they want accomplished.

We investigate and develop a task-oriented user interface (also called a task-based user interface, TASKUI) which is oriented around tasks rather than the functions of individual devices. TASKUI is deployed on intermediary handheld computers (e.g., smartphones and tablets). It can connect to multiple TASKOS instances to provide users a one-stop access point to the devices and services in multiple smart environments. TASKUI acts as a bridge that links user tasks with devices’ functions in smart environments and shields users from variations in the availability of devices and services.

Using TASKUI, the user starts a task by telling the system an abstract goal of the

task, e.g., “Watch movie”. The system checks if the task is possible in the current

smart environment. If the task is possible, the system will walk the user through

the steps to accomplish the task. Specifically, to design and develop TASKOS, we

research and investigate the following issues.

Task guidance. The system provides the user with step-by-step task guidance which tells the user the steps and instructions to accomplish the task. The steps are ultimately linked either to functions of the smart environment (for system steps) or to manual actions by the user (for manual steps).

Multiple-device tasks. Users can seamlessly accomplish tasks which can span

different devices.

Distributed collaboration. TASKOS allows multiple distributed users to collabo-

rate on the same task. During the execution of a task, the user can invite

other distributed users to remotely accomplish some of the sub-tasks or to

remotely observe the accomplishment of the task done by the others.

Task suspension and resumption. The user can suspend an unfinished task and resume it later, possibly on another intermediary computer.

Multi-tasking. The system allows the user to handle more than one task at the same time (e.g., interleaving the execution of multiple tasks). When the user switches to an unfinished task, the system restores it to the step at which the user left off.

Context-aware task suggestion. TASKOS can automatically suggest to the user the tasks available in a particular smart environment.
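To make the step-by-step guidance, suspension, and resumption features above concrete, the following is a minimal sketch of how a task engine might walk a user through the steps of a task. The step structure and method names are our own illustration, not the actual TASKOS implementation:

```python
# Illustrative sketch of step-by-step task guidance with suspend/resume
# (hypothetical structure, not the actual TASKOS engine).

class TaskInstance:
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps          # each step: ("system", fn) or ("manual", text)
        self.cursor = 0             # index of the next step to perform

    def next_step(self):
        """Perform or present the next step; return its description, or None when done."""
        if self.cursor >= len(self.steps):
            return None
        kind, payload = self.steps[self.cursor]
        self.cursor += 1
        if kind == "system":
            return f"[system] {payload()}"     # bound to a device function
        return f"[manual] {payload}"           # instruction shown to the user

    def suspend(self):
        """Capture enough state to resume later, possibly on another device."""
        return {"name": self.name, "cursor": self.cursor}

    def resume(self, state):
        self.cursor = state["cursor"]

steps = [
    ("system", lambda: "Bluetooth enabled"),
    ("manual", "Select the photo to send"),
    ("system", lambda: "Photo transferred"),
]
task = TaskInstance("Send photo", steps)
print(task.next_step())          # → [system] Bluetooth enabled
saved = task.suspend()           # e.g., the user switches to another task
task.resume(saved)               # later restored to the same step
print(task.next_step())          # → [manual] Select the photo to send
```

Because the suspended state is a small, serialisable record, it could in principle be transferred to another intermediary computer, which is the essence of the suspension/resumption feature described above.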

The ultimate aim of this research is to fill the gap between a smart environment’s functions and the tasks that users want accomplished. Our vision is to make it easy for users with little training, especially casual users in unfamiliar smart environments, to accomplish their tasks using devices’ functions. We investigate providing a task-oriented system, aiming to improve users’ performance of tasks in smart environments.

1.3 Research Methodology

In this thesis, we first propose a conceptual design of the framework that allows us to develop and deploy TASKOS systems for smart environments. Next, we implement the framework and develop a prototype TASKOS system which supports a number of tasks in simulated smart environments. The aim of the prototype is twofold: to demonstrate the feasibility and versatility of our framework and to serve in our experiments to evaluate the framework. Finally, we evaluate our approach by comparing our framework with existing ones and by conducting a user experiment followed by a user survey.

Analysis of problems and requirements We analyse the problems of the current application-centric paradigm for interaction with smart environments. We analyse the needs of end-users and developers and identify the requirements that should be met in order to (1) aid end-users in accomplishing their tasks in smart environments more effectively and efficiently and (2) let developers bring services to end-users in a timely manner.


Proposal of a solution We propose a solution which aims to address the identified

problems and to meet the requirements above.

Design of the framework Based on the identified requirements, we design an integrated framework that includes a methodology and a tool for the development and deployment of the solution in smart environments.

Implementation of the framework To evaluate the solution and the framework itself, we implement a prototype system and deploy it in a simulated smart environment which is used for the evaluation.

Evaluation of the framework The evaluation measures how well the proposed solution and the framework address the problems and meet the requirements. An overview of the methods used for the evaluation is given in the next section.

1.4 Thesis Scope

It is important to note that we do not claim to propose new concepts of task computing or task-oriented user interfaces. The point of this thesis is rather to investigate an approach to developing and deploying task-oriented systems for smart environments. To achieve this, we develop an integrated development and deployment framework that includes a methodology, tools, and systems for the development and deployment of task-oriented systems in smart environments. Our framework is based on the model-driven methodology, in which task models are the fundamental elements of the framework. We explore scripting languages for specifying task models in our framework.

We focus on helping users accomplish short-term tasks which involve controlling devices and invoking services in smart environments, especially in public, open, and dynamic smart environments with which a user may not be familiar. We assume that the controlled devices have their functions exposed as services (e.g., Web services); invocation of the services activates the corresponding functions of the devices. We try to augment a smart environment with the possible tasks that can be achieved by a combination of the functions provided by the available devices and services in that environment. Note that a task may require merely services, such as “Book a taxi”.

This thesis does not focus on migration of task instances across environments (e.g., [26, 27]), mechanisms of automatic resource allocation for tasks (e.g., [28]), task-based user interface design (e.g., [29]), user interface markup languages (e.g., [30, 31]), service discovery (e.g., [32]), or service composition (e.g., [33]). However, these techniques can be integrated with our framework to build comprehensive task-oriented systems.

1.5 Evaluation Methods

We evaluate our approach from two perspectives: the system development perspective and the usability perspective. From the development perspective, we compare our approach with existing approaches in terms of the features that support the development and deployment of task-oriented systems in smart environments. From the usability perspective, we analyse the advantages of TASKOS and compare them against the existing approaches.

To evaluate to what extent our system is easy, effective, efficient, satisfying, and learnable for users interacting with smart environments (i.e., operating electronic devices and pervasive services to accomplish their tasks in smart environments), we conduct a user experiment. The user experiment allows us to measure

how users perform their tasks using TASKOS. The participants are asked to ac-

complish a number of tasks with and without using TASKUI. The performance of

the subjects is measured using several metrics, including the time to complete a

task, the number of errors made while attempting to complete a task, and how

often external help was required to complete a task.

Moreover, by implementing the prototype system and demonstrating it working, we can show the feasibility and practicability of developing, deploying, and applying the proposed framework.

1.6 Thesis Contributions

This thesis has several contributions:

• We propose a comprehensive framework for development and deployment

of task-oriented systems for smart environments.

• The framework includes a task modelling language for specifying task models. The language comes with an RNC6 schema which makes task models written in our language verifiable and executable.

• We provide TASKOS as a prototype system to be deployed into smart environments. TASKOS includes the implementations of an interpreter and an execution engine which allow the verification and execution of task models written in our task modelling language.

• We also provide TASKUI, a mobile application that coordinates with TASKOS in order to give users the experience of the task-oriented interaction paradigm.

• The evaluation of TASKCOM using a comparison, a user experiment with a post-experiment questionnaire, and a user survey shows that: (1) TASKCOM provides more features than the existing frameworks; (2) the participants accomplished tasks more efficiently and effectively with the help of TASKUI; (3) the participants found that TASKUI was easy to use, required less knowledge to operate, and met their expectations; and finally (4) the context-aware task suggestion mechanism was significantly effective and efficient: the tasks suggested by TASKOS covered more than 50% of the tasks expected by the participants, while the order of the suggested tasks matched about 80% of the task orders suggested by the participants.

6http://relaxng.org/


1.7 Thesis Organisation

Chapter 2 We present the concepts related to smart environments and the existing models of smart environments. Then we present a review of related work, which includes universal remote systems, task guidance systems, and task-driven systems. Finally, we outline the limitations of the existing approaches and the gaps that remain in achieving the goals of user-centric smart environments.

Chapter 3 We elaborate on the usability problems of current smart environments. We then present an overview of our solution, along with our plan to design, implement, and evaluate the proposed solution.

Chapter 4 We present the conceptual architecture of the proposed framework. We

define the concepts and components of the framework.

Chapter 5 We present an implementation of the framework and a prototype sys-

tem.

Chapter 6 We present the evaluation of the proposed framework. The evaluation methods include an analytical comparison with existing frameworks, an experiment, and a user survey.

Chapter 7 We present the thesis conclusion and the future directions.

Appendixes We also include several appendixes at the end of the thesis. The ap-

pendixes include the design of the questionnaire which was a part of the

evaluation experiment, the experiment data, the design of the evaluation

user survey, and the survey data.


Chapter 2
Background and Related Work

In this chapter, we begin by briefly introducing the general notion of smart environments, followed by the computing models of smart environments. We then present a review of related frameworks, systems, and techniques which are claimed to support users’ tasks in smart environments. We classify the reviewed frameworks and systems into three main categories: universal remote systems, task guidance systems, and task-driven systems.

Universal remote control systems: To address the problem of too many physical remote controls in a smart environment, universal remote control systems have been developed. A universal remote control system can automatically generate a user interface on a mobile computer (e.g., a smartphone) for controlling various brands of one or more types of consumer electronics devices (normally one device is controlled at a time). Some systems can adapt the user interface to the capabilities of the hosting mobile computer. Universal remote systems usually require specifications of the controlled devices and of the hosting computers: the former are used for the generation of user interfaces, while the latter are used for the adaptation techniques.

Task guidance systems [34]: To address the difficulties users (especially individuals with acquired cognitive impairments) may experience in problem solving and in performing the sequential steps required to complete their tasks in daily life, task guidance systems have been developed to aid users in managing their daily tasks. A task guidance system usually organises a user’s predefined tasks into a daily schedule and instructs the user in how to perform the tasks in the schedule. The system can provide both reminders and interactive task guidance, giving step-by-step instructions as the user completes a task.

Task-driven systems: A task-driven computing system (or task-oriented system) aims to let users interact with their computing environments in terms of high-level tasks (rather than low-level abstractions: applications and individual devices) and to free them from low-level configuration activities [23]. Using a task-oriented system, a user tells the system the task he/she wants accomplished; the system then automatically configures the environment and achieves the task for the user or guides the user through the steps of the task.

2.1 Smart Environments

In this section, we present an overview of smart environments and existing com-

puting models of smart environments.

2.1.1 Overview of Smart Environments

Pervasive computing (or ubiquitous computing) was probably first introduced by Mark Weiser [2, 35] in the early 1990s. It is a paradigm of seamless integration of computational capabilities and information into everyday environments. Such environments are called smart environments, smart spaces [1], or intelligent spaces [36]. Much research in pervasive computing focuses on finding solutions for using smart environments more effectively and productively while making the availability of computers non-intrusive and virtually invisible to users. Previous research [26, 37] has identified several challenges to realising smart environments:


• Heterogeneity of device capabilities, platforms, and network protocols.

• Spontaneous mobility of users, which can lead to unavailability of devices and services and to computing sessions distributed over a range of devices and environments.

• Unanticipated (unpredictable) availability of devices, network connections,

and services.

• Human attention is an especially scarce resource, because users are often

preoccupied with walking, driving, or other real-world interactions.

2.1.2 Computing Models of Smart Environments

Usually, research on smart environments first tries to model the environments. There have been two common types of models: ad-hoc models and infrastructure-based models. With the ad-hoc models [38, 39], a smart environment is modelled as a set of mobile computing devices, each of which is seen as a mini-desktop hosting applications which exploit the environment’s functionalities. This type of development forces developers to work at a lower level of abstraction by directly programming devices or networks to control them. In addition, if we consider that smart environments are characterised by a continuous evolution of hardware and software, the use of ad-hoc solutions makes maintenance and further adaptation extremely difficult [40]. It is also difficult to build applications for individual devices that can interconnect and interoperate with each other, because applications in this model are isolated from each other, even when they are running on the same device. With the infrastructure-based models [37, 41], a smart environment

consists of devices, users, software components, and user interfaces. Mobile com-

puting devices act as portals into applications and data which are hosted by the

infrastructure (i.e., the devices are not a repository of custom software managed

by users). Applications are means by which users perform their tasks. The sepa-

ration between software components and user interfaces enables development of

adaptation techniques which can apply at two levels: operation logics and user


interfaces. In this thesis, we adopt this model. However, we focus more on user

tasks and user interaction with smart environments, rather than on the techniques

which support adaptation of software components and user interfaces to the het-

erogeneity of smart environments.

2.2 Universal Remote Systems

Universal remote systems provide users with a universal user interface on a separate (intermediary) handheld computer (e.g., a PDA or a smartphone) that allows them to control devices (e.g., TVs, printers, and lights) within a smart environment at a distance.

2.2.1 Point & Control Metaphor

AIDE [42], UIA [21], and PUC [22] can be seen as the first universal remote systems for controlling devices in a smart environment. To control a device, the user “points” to that device. The controlled device then transfers to the intermediary handheld computer a description of its commands, which is displayed on the intermediary device’s screen for the user to select from. The selected command is transferred to the controlled device, and the action is carried out at the device. The pointing usually uses a laser beam attached to the intermediary handheld computer; the beam gives optical feedback indicating which device is selected. In some systems, pointing also means that the user explicitly selects a device to control from a list of devices on an interface.

UbiControl [19] is a similar system. In UbiControl, the description of each controlled device is an HTML document which is stored on a server. After device selection through the laser pointer, the description is downloaded to the intermediary handheld computer. The user can then control the selected device in a web-browser-like fashion. A similar system called REACHeS [43] also represents devices’ control interfaces as web pages, but it instead uses RFID technology for


selecting devices. The user controls a device by touching an RFID tag with a mo-

bile terminal that runs REACHeS and that is equipped with an RFID reader. The

corresponding device’s control interface is rendered on the mobile terminal as a

web page for the user to control the device.

UNIFORM [44, 45] goes further in this direction by generating a remote control

interface which is tailored for each user to maintain the interface consistency with

that user’s past experience. UNIFORM identifies similarities between different

devices based on their specifications and user-defined preferences. The similarity

information allows UNIFORM to use the same type of user interface controls for

similar functions, so that the functions can be found with the same navigation

steps, and to create interfaces that have a similar visual appearance. However, as the

authors noted, this solution is often limited because of the lack of useful semantic

information about unique functions. Moreover, it can take a substantial amount of

time (about five hours for a VCR and one week for three different VCRs, even for an expert) to complete a specification for an appliance, on which similarities between

different appliances are based.

2.2.2 Spoken Dialogue Systems

Dialogue systems (e.g., D'Homme [20] and EXACT [46]) offer an easy-to-use interface to smart environments, making it possible for users to issue simple voice

device-control commands. Their natural-language interfaces allow users to control

devices with direct verbal orders like “Increase the temperature 5◦” or “Defrost

2 pounds of corn”. However, simple utterances are severely limited for creating

tasks that have complex procedures and generally need interactive dialogue to

seek clarifications to avoid misunderstandings and to provide rich feedback [47].

They also require the user and the system to share the same vocabulary for talking about the concrete device commands. To improve recognition accuracy, such systems tend to constrain the permitted user utterances to a subset of natural language, which places the burden of learning a new language on the user.


There are also several limited forms of spoken dialogue systems in the software industry, especially in the mobile application industry. These systems aim to

better support mobile users in accomplishing their tasks related to mobile devices.

In particular, Apple's Siri1, Google's Google Now2, Samsung's S Voice3, and Microsoft's Tellme4 are initial attempts which allow users to tell their mobile devices the tasks they want to accomplish. Such systems are becoming an important factor for competition in the mobile industry. They combine speech recognition (mostly of English) with massive knowledge bases (e.g., the Wolfram Alpha knowledge base), which enables them to answer general information questions (e.g., who is the president of the USA? or what is the weather in Paris today?) or to execute mobile-phone-related tasks such as sending messages, making phone calls, or setting reminders. Most of these tasks are executed on a single mobile device. Moreover, they only support a limited number of predefined tasks which are pre-built into the systems; therefore, any change to the tasks requires an update of the systems. That is, the users cannot add third-party tasks (e.g., their own tasks) into the systems.

2.2.3 Summary

A fundamental feature of the universal remote systems is that the user interaction

with a smart environment is device-centric (i.e., the user selects device by device

to control them separately). With the device-centric interaction paradigm, the user

can only control one single device at a time by explicitly selecting the device they

want to control. This becomes cumbersome if the smart environment has a large

number of devices, some of which the user may not even be aware are controllable (e.g., the user may not be aware that a chair can be controlled by touching an NFC tag attached to it with a smartphone). Device selection also imposes a mental load, requiring the user to map abstract selections to spatially distributed physical targets. Importantly, the

1 http://www.apple.com/iphone/features/siri.html
2 http://www.google.com/landing/now/
3 http://www.samsung.com/global/galaxys3/feature.html#svoice
4 http://www.microsoft.com/en-us/tellme/


device-centric interaction paradigm does not work well for the user when the user

wants to accomplish tasks which can span multiple devices and services.

Another feature of the universal remote systems is that their user interfaces

are function-oriented. A function-oriented user interface presents the user with all

functions of a device in terms of buttons, options, menus, modes, and so on, and leaves the user alone to combine these functions to accomplish their tasks. Function-oriented user interfaces do not provide any clues, guidance, or help for the user in planning and problem solving.

Most of the existing universal remote systems support only simple device

commands (e.g., “On”, “Off”, and “Play”), not tasks which can have multiple

steps. Also, in a smart environment, user tasks are not always associated with

physical devices. Some tasks (e.g., borrowing a book) may be accomplished through services provided by the smart environment rather than by a physical device. The device-centric interaction paradigm does not cover this situation.

Some of the systems require special hardware units (e.g., laser beam generators or RFID readers) attached to the controlling handheld computer and/or the controlled devices. Some device-selection techniques (e.g., laser beam or RFID) require line of sight or close physical proximity to the controlled devices.

We want to provide a system that allows end-users to focus on their tasks,

rather than on selecting devices and combining functions; that assists users in accomplishing multiple-step, multiple-device/service tasks; and that does not require

the users to install additional software or attach any special hardware.

2.3 Task Guidance Systems

Task guidance systems or prompting systems [34] are designed to provide step-by-

step task guidance to assist individuals with the planning and execution of their

daily tasks. Prompting involves breaking a task down into its constituent parts and creating prompts, consisting of images and instructions, for each step.

A prompting script is a set of prompts that make up a task.


2.3.1 PEAT, ISAAC, Jogger, and ICue

PEAT [34] is a task planning and execution assistant that helps users plan and execute their daily tasks. In PEAT, users define scripts for

routine tasks such as making a sandwich, paying a bill, the morning routine, or

going shopping. A script includes a sequence of sub-tasks each of which may

further include other sub-tasks. The user may specify a start time and an end

time for (sub-)tasks. Scripts can also contain choice points which identify where a

choice must be selected from a set of alternatives. Taking the scripts as inputs, the

system automatically generates concrete plans for executing the specified scripts.

PEAT then assists the user in executing the plans by using visual and audible cues

to prompt the user through each plan step. The user simply acknowledges receipt

of one step to move on to the next step.
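PEAT's script-and-prompt cycle can be illustrated with a small sketch. The script format and field names here are hypothetical stand-ins for PEAT's internal representation:

```python
# Sketch of a PEAT-style prompting script: a task is broken into steps,
# some of which are choice points, and the user acknowledges each prompt
# to advance to the next one. Field names are illustrative.

script = {
    "task": "Make a sandwich",
    "steps": [
        {"prompt": "Get two slices of bread"},
        {"prompt": "Choose a filling",
         "choices": ["cheese", "ham"]},        # a choice point
        {"prompt": "Assemble the sandwich"},
    ],
}

def run_script(script, choose):
    """Prompt the user through each step; `choose` resolves choice points
    (in PEAT the user must select manually). Returns the delivered prompts."""
    delivered = []
    for step in script["steps"]:
        if "choices" in step:
            picked = choose(step["choices"])
            delivered.append(f"{step['prompt']}: {picked}")
        else:
            delivered.append(step["prompt"])
        # the user acknowledges the prompt, then execution moves on
    return delivered
```

For example, `run_script(script, choose=lambda cs: cs[0])` walks the user through all three prompts, resolving the choice point to its first alternative. Note that the prompts are purely descriptive text: nothing in the script is bound to an actual device function.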

PEAT can provide task guidance but cannot accept user inputs (e.g., entering text or selecting a choice) during task execution. Steps in a PEAT script are

purely descriptive texts which have no bindings to actual device functions or ser-

vices. This makes it impossible for PEAT to perform actions on the user’s behalf

(e.g., turning on a light). PEAT also does not support automatic selection of a

choice from a set of specified alternatives; the user must instead manually make

such a selection. Because PEAT operates on a single computer, it does not provide

access to distributed users and essentially functions as an alarm clock.

ISAAC [48] and COGORTH [49] are similar systems which can provide users

with descriptive task guidance. ISAAC allows the user to create a task script on

one computer and then copy it to another (e.g., downloading from a desktop computer to a PDA). COGORTH allows the user to perform multiple tasks

simultaneously.

The Jogger system [50] and MEMOS [51] further allow the user to enter inputs

during task execution and upload those inputs from the user's computer to a pre-known remote computer for outcome tracking and analysis. This feature is

similar to our approach. But our approach goes beyond this feature by allowing


collaboration between any distributed users who have been invited to participate

in the task execution.

ICue [52] improves PEAT by housing the task guidance software on a central

server and delivering only the instructions, as web pages, to the user's computer (e.g., a laptop, PDA, or smartphone). This distributed architecture also allows remote management and monitoring of user performance, tracking whether the user is successfully completing a task. ICue can adjust task instructions and branch from

one instruction to another based on user responses, including whether an action

or instruction has been completed. ICue task scripts are defined as RAPS procedures [53].

2.3.2 DiamondHelp

The DiamondHelp system [54] provides a mixed-initiative interface for the user

to control devices. It uses SharedPlan [55] as a conversational model between the

user and the interface. A conversational model includes a set of user utterances

(e.g., “What next?”, “Never mind”, “Oops”, “Done”, and “Help”) which have the

same meaning for all tasks. A conversational model can also include procedures

that accomplish high-level tasks in terms of concrete device operations.
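The task-independent utterance set that DiamondHelp's conversational model draws on can be pictured as a fixed mapping from utterances to dialogue moves. The move labels below are illustrative, not DiamondHelp's API:

```python
# Sketch of a task-independent utterance set in the style of
# DiamondHelp's SharedPlan-based conversational model: each utterance
# has the same meaning for all tasks. Dialogue-move names are made up.

UTTERANCES = {
    "What next?": "ask_for_next_step",
    "Never mind": "cancel_current_goal",
    "Oops":       "undo_last_action",
    "Done":       "confirm_step_complete",
    "Help":       "explain_current_step",
}

def interpret(utterance):
    """Map a user utterance to a dialogue move, regardless of the task."""
    return UTTERANCES.get(utterance, "unrecognised")
```

Because the mapping is task-independent, the same small vocabulary carries the user through any task; the task-specific knowledge lives in the procedures, not in the utterances.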

DiamondHelp can only support single-device tasks. In other words, it does

not aim at controlling an entire smart environment of multiple connected devices.

DiamondHelp relies on designers to create the direct-manipulation portions of the

interface for each appliance. Unlike DiamondHelp, our system uses task specifi-

cations to generate user interfaces for user interaction with tasks.

2.3.3 Roadie

Also aimed at providing task guidance (i.e., mixed-initiative assistance for executing tasks on consumer devices), the Roadie system [5] does not require user-defined task scripts; it instead uses the EventNet database [56] to generate step-by-step guidance towards the goal of a given task. EventNet, which represents temporal


and causal relations between events, is computed based on the Open-Mind Com-

monsense Knowledge Base5 [57]. However, the generated task guidance cannot

always be trusted because the Open-Mind Commonsense Knowledge Base can

contain garbage data which a malicious or sarcastic user may have typed in.

Roadie may also suffer from possible misinterpretation of natural language sen-

tences. Moreover, because the possible actions and goals are restricted to the con-

tents of the commonsense database, Roadie may not be able to support uncommon

actions or goals which are only supported within an enterprise or private smart

environment or goals which are related to a new class of devices/services which

have just been added to the environment. Note that, unlike TASKCOM, Roadie only provides informative task guidance (i.e., it cannot actually execute the functions of devices); hence, the user must manually manipulate the devices.

2.3.4 Summary

The existing task guidance systems can assist the users with the planning and ex-

ecution of their daily tasks. These systems mainly focus on providing descriptive

guidance and reminders. They do not provide mechanisms for binding actions or

primitive tasks to actual device functions or software services.

2.4 Task-Driven Systems

Task computing6, pioneered by Wang and Garlan at CMU [23] and by Fujitsu [24], seeks to provide a system (called a task-oriented system) that allows users to interact with computers in the form of high-level tasks while freeing them from

low-level configuration activities. In other words, task computing tries to shift

users’ focus towards what they want to do, and away from the specific means for

doing those tasks [24]. Tasks are first-class objects in such systems.

5 A database of English sentences describing everyday life, contributed by volunteers on the Web
6 Similar terms are task-driven computing and task-oriented computing


2.4.1 Task-Driven Systems in Software Industry

There are several limited forms of task-oriented systems in the software industry.

On the Windows operating system, when a user inserts a music CD into the CD tray, a dialog pops up and suggests tasks such as "Play CD" or "Copy

Music”. The fundamental limitations of these task-oriented applications are that

they only support a limited number of predefined tasks which are pre-built into the applications, and that any change to the tasks requires an update of the applications. Also, the users are not allowed to personalise the tasks, to define their own tasks, or to add more tasks from different sources.

2.4.2 Aura

The Aura system [26] is a task-driven system developed based on the

proposal by Wang and Garlan of CMU [23]. Its aim is to minimise distractions on

a user’s attention when they accomplish tasks in a smart environment, creating

a smart environment that adapts to the user’s context and needs. Aura heavily

relies on context reasoning, machine learning techniques, pre-defined rules, and

user preferences to provide automatic actions and adaptations. While context rea-

soning and machine learning techniques are not always reliable, the requirement for pre-defined rules and user preferences makes Aura unscalable.

Aura has a component called Prism that can support task migration across

smart environments. Aura describes a task as a virtual service which is a compo-

sition of other abstract services with QoS parameters. At runtime, Aura binds the

abstract services to actual services based on the QoS parameters. Aura then main-

tains the execution states of the task globally (which is similar to our approach,

i.e., TASKCOM also maintains task instances on global task servers). This allows

the task to be restored in different environments. However, to enable automatic

service bindings at runtime, there is a need for an agreed common vocabulary for

specifying services and QoS parameters. Such common vocabularies and semantic service-matching algorithms still suffer from reliability and scalability problems.
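Aura's runtime binding of abstract services to concrete ones can be sketched as selecting, for each abstract service, a registered candidate that satisfies the task's QoS requirements. The registry contents and the matching rule below are a hypothetical simplification:

```python
# Sketch of Aura-style late binding: a task is a composition of abstract
# services, and at runtime each abstract service is bound to a concrete
# service whose QoS values meet the task's requirements. The registry
# and QoS fields are illustrative, not Aura's actual vocabulary.

REGISTRY = {
    "video-playback": [
        {"name": "tv-player",     "qos": {"resolution": 1080, "latency_ms": 30}},
        {"name": "tablet-player", "qos": {"resolution": 720,  "latency_ms": 10}},
    ],
}

def bind(abstract_service, required_qos):
    """Bind an abstract service to the first candidate meeting all QoS bounds."""
    for candidate in REGISTRY[abstract_service]:
        q = candidate["qos"]
        if (q["resolution"] >= required_qos["min_resolution"]
                and q["latency_ms"] <= required_qos["max_latency_ms"]):
            return candidate["name"]
    return None  # no concrete service satisfies the requirements
```

The sketch also makes the vocabulary problem visible: binding only works if the task's requirement keys and the registry's QoS keys come from the same agreed vocabulary.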


Owing to the scalability and reliability problems mentioned above, Aura was in fact only deployed in well-augmented environments which provide the necessary facilities for context observation and inference. Its prototype implementation only

supports knowledge-related tasks (e.g., editing documents) on standard desktop

operating systems, such as Windows and Linux. It does not focus on user interaction with smart environments, controlling devices, or providing task guidance.

2.4.3 TCE

Similar to Aura, where tasks are represented as compositions of services, Fujitsu’s

Task Computing [24] provides a system (called TCE) that allows the user to accomplish

a task by either choosing a service or composing a complex service using multiple

available services. For example, a user can compose a “Contact Provider” ser-

vice and an “Add into Contact List” service to exchange a contact. One can also

compose a “Local File” service and a “View on Projector” service to show a pre-

sentation on a projector and control the presentation via an interface on the PDA.
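TCE's one-step composition can be sketched as chaining a source service into a sink service when their content types match. The service records and the type-matching rule below are hypothetical:

```python
# Sketch of a TCE-style one-step composition: a "Local File" service
# produces content that a "View on Projector" service consumes. The
# produces/consumes typing is an illustrative simplification.

SERVICES = {
    "Local File":        {"produces": "presentation", "consumes": None},
    "View on Projector": {"produces": None,           "consumes": "presentation"},
}

def compose(source, sink):
    """Compose two services if the source's output feeds the sink's input.
    TCE supports only this single step: the result cannot be composed further."""
    if SERVICES[source]["produces"] == SERVICES[sink]["consumes"]:
        return {"composite": f"{source} -> {sink}", "composable_further": False}
    raise ValueError("incompatible services")
```

The burden this design leaves with the user is visible here too: to pick a valid (source, sink) pair, the user must already understand what every available service produces and consumes.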

This approach hides the complexity in the underlying service management and

allows users to compose complex services for their intended tasks. However, the

users need to spend time and effort to understand the goals of available services

to be able to correctly compose them for their desired tasks. As the number of services increases, the users could be overwhelmed with too many services, which may hinder them from selecting the right services for the right tasks. This situation

is similar to selecting a mobile application from current mobile application stores

(e.g., Apple’s App Store) where mobile users are currently overwhelmed with too

many applications for even a single task (e.g., managing to-do lists or shopping

lists).

Also note that TCE can only support one-step compositions, i.e., the resulting composite service cannot be composed further. Moreover, how user interfaces for tasks are generated is not mentioned.


2.4.4 InterPlay

Like our system, which aims to make smart environments easier to use, InterPlay [58] provides a pseudo-English user interface which allows a user to express a task such as "Play the Matrix movie on the TV in the living room". The sys-

tem then infers the user’s intent, provides suggestions, and automates the task

based on the descriptions of devices, the task’s contents, and the user’s prefer-

ences. Although Interplay provides a simple and intuitive means to help capturing

user’s intent, the user needs to learn how to express accurately their tasks in natu-

ral language. Our approach goes beyond providing a similar system with more ad-

vanced features for both end-users and developers, we also provide a framework

that supports development and deployment such system in smart environments.

Like our solution, InterPlay is claimed to be able to suggest available tasks to the user based on devices' capabilities, the user's location, and the user's preferences. How-

ever, we could not find in the publications how this feature was implemented and

evaluated. InterPlay operates based on descriptions of devices and generic specifi-

cations of tasks. Our approach does not require descriptions of devices but it does

require that devices be provided with software services for controlling their

functions.

2.4.5 Olympus

Olympus [59] is a high-level programming model for smart spaces. The aim of

Olympus is not to address the usability problem of smart spaces. Instead, it aims to

facilitate programming applications for smart spaces by the use of a high-level pro-

gramming language. The high-level programming language allows developers to

program applications in terms of abstract entities (e.g., services, application com-

ponents, device functions, physical objects, locations and users) and pre-defined

operations (e.g., start and stop a component). At runtime, the Olympus frame-

work will resolve these abstract entities into actual entities based on constraints,

ontological descriptions of entities, the available resources, and space-level policies. The key elements of the framework are ontologies for hierarchically specify-

ing entities and an algorithm for semantically discovering resources. The authors

also claim that their framework can also recover from failures of actions by us-

ing alternative resources. While this framework mainly aims at supporting the

rapid development of applications for smart spaces, it does not address the us-

ability concern of the end-users in smart spaces (whereas addressing this concern is

the main aim of this research). Our approach also supports rapid development of

task-oriented systems by using a high-level language for specifying task models.

2.4.6 Huddle

Huddle [60] aims to address the complexity of controlling a system of multiple

connected devices in a smart environment. Huddle uses content flows to logically

connect devices together for a given task. A content flow routes the content (e.g.,

movie or music) of a task from a content source to a content sink. For example,

in a home theatre, the content flows for a task of watching a movie are a DVD

player being “connected” to a television and to a stereo’s speakers. For each of

content flows, Huddle can generate a user interface that allows the user to control

the flow. For example, for the flow that connects a DVD player to a television,

the generated interface allows the user to control the movie experience such as

brightness, colour, and zoom level. For the flow that connects a DVD player to a

stereo’s speakers, the generated interface allows the user to adjust volume.
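Huddle's content flows can be pictured as paths through a graph of connected devices: each flow runs from a content source to a content sink, and a control interface is generated per flow. The connection graph below is a hypothetical home theatre:

```python
# Sketch of Huddle-style content flows: devices are nodes, physical
# connections are directed edges, and each (source, sink) flow gets its
# own generated control interface. The wiring here is illustrative.

CONNECTIONS = {
    "dvd-player": ["television", "stereo-speakers"],
    "television": [],
    "stereo-speakers": [],
}

def content_flows(source):
    """Enumerate the (source, sink) flows that the UI generator would cover."""
    return [(source, sink) for sink in CONNECTIONS[source]]
```

For the watching-a-movie task, the DVD player yields two flows: one to the television (picture controls such as brightness and zoom) and one to the stereo's speakers (volume control).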

Huddle focuses more on the generation of user interfaces for controlling multi-

ple devices. It is designed to support multimedia tasks which rely on flows of data

such as audio or video data. In contrast, we focus on user interaction with smart

environments and assist the users in problem solving by providing task guidance.

2.4.7 ANSI/CEA-2018

ANSI/CEA-2018 [6] is a standard which provides a methodology that uses task

models at runtime to guide users in accomplishing tasks. ANSI/CEA-2018 provides


a standard language for describing task models which include input and output

parameters, pre-conditions and post-conditions, grounding scripts to device func-

tions, steps and sub-tasks, data bindings between sub-tasks, and applicability con-

ditions.
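The ingredients the standard's task-description language covers can be sketched as a data structure. The field names and the example task below are illustrative; the standard itself defines an XML language for this:

```python
# Sketch of the elements an ANSI/CEA-2018-style task model carries:
# parameters, pre/post-conditions, grounding to device functions, and
# decomposition into steps. Field names and values are made up.

task_model = {
    "task": "RecordProgramme",
    "inputs": ["channel", "start_time", "duration"],
    "outputs": ["recording_id"],
    "precondition": "recorder.free_space_minutes >= duration",
    "postcondition": "recording stored under recording_id",
    "steps": [
        {"subtask": "TuneChannel",  "grounding": "tuner.set_channel(channel)"},
        {"subtask": "StartCapture", "grounding": "recorder.start(start_time, duration)"},
    ],
}

def grounded(model):
    """A model is executable (not purely descriptive) when every leaf step
    is grounded in an actual device function or service call."""
    return all("grounding" in step for step in model["steps"])
```

The grounding scripts are what distinguish such task models from the purely descriptive scripts of systems like PEAT: a grounded step can be executed on the user's behalf rather than only prompted.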

Inspired by the ANSI/CEA-2018 standard, we extend and apply the concept of using task model specifications at runtime to provide task guidance for users accomplishing their tasks in smart environments via intermediary handheld computers (e.g., smartphones and tablets). We also provide a comprehensive framework for the development and deployment of such task-driven systems. We add several mechanisms in order to provide a complete solution for users to discover, execute, and collaborate on tasks.

We believe that the applicability of our framework is broader than this standard

because ANSI/CEA-2018 targets the direct control of consumer electronics, while our framework can be applied to a network of devices and services (i.e., entire environments), composite tasks, collaborative tasks, and remote device-control tasks.

2.5 Conclusion

We have reviewed a number of representative systems which have been developed

to better support users in accomplishing their tasks in smart environments. Some

of them have speech recognition capabilities (mostly for English) and massive knowledge bases (e.g., the Wolfram Alpha knowledge base and the Open-Mind Commonsense knowledge base); currently, they have only reached the stage where they can answer informative questions, issue task reminders, or activate single-device commands such as "Turn On", "Turn Off", and "Play". Other systems heavily

rely on the capability of context recognition (e.g., the recognition of user activ-

ity [16]). Because techniques for context recognition often require a training phase and streaming data, they are rather domain-specific and difficult to deploy

widely in general settings. In fact, as far as we know, not many such systems


have been available in our everyday life.

We have found that the user interaction paradigm in the existing approaches is device-centric (i.e., pointing to a single device to obtain its user interface in order to control it, then pointing to another device, and so on). This paradigm is

based on the traditional assumptions that: (1) controlled devices are not connected

to each other to coordinate their functions, and (2) smart environments are closed and personal (i.e., homes and offices which only the inhabitants know about and have access to). Although the device-centric user interaction with smart environments

has been a dominant paradigm, from the end-user’s perspective, it has the follow-

ing limitations:

It does not scale well with the increasing number of controlled devices. A future

smart environment may have tens of different devices (perhaps some of which

users may not be aware of, especially in public smart environments). Hence,

locating and selecting a single device out of the jungle would be cumbersome

for users.

It does not cope well with frequent changes of smart environments. A smart en-

vironment can be rapidly configured by adding, removing, or upgrading de-

vices in it. Accordingly, to carry out tasks, a user must stay aware of

these changes in order to know what devices are available and what func-

tions they provide. This demands much attention, especially from occasional visitors who come to a public smart environment for a short time and then leave.

It does not cope well with tasks involving multiple devices. Current devices op-

erate in isolation from one another, especially devices produced by different manufacturers. They have no knowledge of the existence of other devices. They cannot automatically discover and combine their functions with other devices' functions. Therefore, the user must manually com-

bine the required functions to fulfil a task. Indeed, the user must manually

split the task into sub-tasks and map them to the right devices, or even to


the right functions. For example, a smart room has a light and a window

drape, each of which can be controlled via their own user interfaces. The

user’s task is to brighten the room with more natural light from the outside.

He/she first needs to split this task into two sub-tasks: one is to dim the

light and the other is to open the window drape. The user then maps the two

sub-tasks to the light and the window drape. Next, the user obtains the in-

terface for controlling the light, and obtains the interface for controlling the

drape. He/she may need to interleave between the two interfaces several

times to achieve the preferred level of the room’s brightness. This situation

is cumbersome and time-consuming.
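The brighten-the-room example shows what a task-centric system would do on the user's behalf: decompose the task and drive both devices together. A sketch, with hypothetical device functions standing in for the light's and the drape's real control interfaces:

```python
# Sketch of the brighten-room task done task-centrically: one task
# invocation decomposes into two device-bound sub-tasks, instead of the
# user interleaving between two separate control interfaces. The device
# functions and the room state are illustrative.

room = {"light_level": 100, "drape_open": False}

def dim_light(level):
    room["light_level"] = level

def open_drape():
    room["drape_open"] = True

def brighten_with_natural_light(preferred_light_level=20):
    """The task splits into two device-bound sub-tasks and runs both."""
    dim_light(preferred_light_level)   # sub-task 1 -> the light
    open_drape()                       # sub-task 2 -> the window drape
    return room
```

Under the device-centric paradigm, the splitting, the device mapping, and the interleaving between the two interfaces are all left to the user; here a single task invocation performs them.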

Our approach is motivated by the aforementioned limitations of the current device-centric interaction paradigm. In this thesis, we advocate task-centric user interaction with smart environments. Although several task-centric systems

have been developed, they focus either on task guidance (e.g., task guidance systems), automatic generation of user interfaces (e.g., Huddle [60]), migration of task instances (e.g., Aura [26]), or automatic task planning (e.g., Roadie [5]). Inspired by these different approaches, we build an integrated framework to tackle the usability problem of smart environments and provide more value-added features for end-users.

In the next chapter, we describe our framework that allows development and

deployment of task-oriented systems (i.e., systems which implement the task-centric interaction paradigm) in smart environments. We believe that the task-

centric interaction paradigm can address the limitations of the device-centric in-

teraction paradigm and can provide a greater user experience in using smart envi-

ronments.


Chapter 3

Research Problem and Proposal

In the previous chapter, we reviewed the existing approaches which aim to

address the usability problems of smart environments. The main limitations of the

existing approaches are:

• They are designed to work with pre-known devices in a well-known smart

environment. Therefore, they do not scale well with the increasing number

of devices in smart environments.

• They only support the user in accomplishing single-device tasks. There is a need

for supporting tasks which can span across multiple devices.

• Because they implement the application-centric paradigm, they do not support well the replication, composition, and customisation of tasks.

This chapter presents an overview of the problem domain, the problems we

attempt to resolve, an overview of the proposed solution, and the plan we made

to design, implement, and evaluate the solution.

3.1 Usability Issues of Smart Environments

A smart environment [61] is “a physical world that is richly and invisibly inter-

woven with sensors, actuators, displays, and computational elements, embedded


seamlessly in the everyday objects of our lives, and connected through a contin-

uous network.” There is a trend in smart environment technologies that elec-

tric appliances often come with a mobile application that the users can install on

their mobile devices (e.g., smartphones and tablets) to control the appliances re-

motely. In fact, the current paradigm of user interactions with smart environments

(which consist of controllable devices) is oriented around applications; this paradigm is thus called the "application-centric interaction paradigm" [62, 63]. With this paradigm, to control each device, the user must download and install a dedicated application onto their mobile computer. Each of the controller applications

then presents the user with a user interface which shows its functions in terms of buttons, modes, and menus. We call this interface a "function-oriented user interface". Figure 3.1 presents two examples of such function-oriented user interfaces, from the SAMSUNG Smart Washer/Dryer application1 and the LG TV Remote application2. These mobile applications allow the user to control SAMSUNG Smart

Washers/Dryers and LG Smart TVs.

Figure 3.1: Examples of function-oriented user interfaces. (a) and (b) are the user interfaces of the SAMSUNG Smart Washer/Dryer application; (c) and (d) are the user interfaces of the LG TV Remote application.

The application-centric interaction has been a dominant paradigm in desktop

computing for more than forty years. Two of the assumptions this paradigm is

based on are:

1 http://apps.samsung.com/mercury/topApps/topAppsDetail.as?productId=000000359648
2 https://play.google.com/store/apps/details?id=com.clipcomm.WiFiRemocon


• Users are usually stationary and work on long-term tasks using a powerful personal desktop computer with a large screen.

• Many software applications can be installed permanently and run at the same time on a desktop computer.

However, these assumptions are not always valid with mobile users in smart

environments:

• Users are usually mobile.

• Tasks are usually short-term, event-driven, and frequently interrupted.

• Mobile computers (e.g., smartphones and tablets) usually have limited memory storage, battery, and screen size because of their small form factors.

• Because of the limited memory storage, the number of applications being

installed and running concurrently is limited.

• Unlike desktop environments, which are usually personal and single-user, smart environments are usually public and multi-user.

Because of the above characteristics of mobile users and smart environments, from the end-user's perspective the application-centric interaction paradigm has the following issues when applied on mobile computers for controlling smart environments:

• It does not scale well with the increasing number of devices which need to be controlled [23]. Devices can be upgraded and changed, and so can their controller applications, which the users must then re-download, re-install, and re-learn.

• It does not scale well with the increasing number of functions of an appli-

ance [5].


• It does not support the user well in accomplishing tasks which span multiple devices [58, 60].

• It does not support the user well in replicating, composing, and customising tasks.

In the following sub-sections, we elaborate on these issues.

3.1.1 Many Devices in Smart Environments

Imagine that a future smart environment may have tens of devices3 which can be controlled using mobile applications, and that a future mobile user may encounter many smart environments in his/her life. With the application-centric paradigm, for every device a user wants to control, he/she needs a corresponding mobile application4 installed on his/her mobile computer (e.g., a smartphone or a tablet). Consequently, the user would have tens or even hundreds of applications installed on his/her mobile computer. This eventually leads to a jungle of application icons on the user's mobile computer (see Figure 3.2, for example). In this situation, to accomplish a task (e.g., adjusting the brightness of a room), the user must map the task to the controlled devices (in this case, the ceiling light and the window drape, which can be controlled to adjust the brightness of the room), and then map the devices to the corresponding application icons on his/her mobile computer. This mental process requires knowing, learning, and understanding the related objects (i.e., which devices to control, what functions to activate, and what the icons look like). These requirements can become very tedious and overwhelming for users as the number of controlled devices and their functions continually grows.

The user sometimes needs to uninstall applications which he/she no longer requires in order to free memory storage. Frequently, the user is also required to update

3 In this thesis, the terms “device” and “appliance” are interchangeable.
4 In this thesis, an application means a mobile application which is used to control an appliance. Because the mappings between appliances and applications in our context are one-to-one, “appliance” and “application” are also used interchangeably.


Figure 3.2: Example screens of application icons on mobile devices. Left to right: iPhone 4, Galaxy Nexus, and Nokia N9.

existing applications when new versions are available (e.g., when a device has been upgraded). The activities of finding, installing, uninstalling, and updating applications are time-consuming, cumbersome, and sometimes require expertise. For occasional visitors who visit a smart environment for a short time (e.g., to attend a conference in a smart conference room), these activities can be obstructive. The question here is whether this obstructiveness can be eliminated so that mobile users can just focus on their tasks.

3.1.2 Many Functions of a Device

The user interfaces of applications for controlling devices are not directed towards

user tasks, but rather designed to present all their functions using icons, tabs,

menus, lists, buttons, and options (i.e., function-oriented user interfaces). For a de-

vice with rich functions, the application user interface usually has a deep menu hi-

erarchy and many options. Finding a particular function in a deep menu hierarchy

on a small screen of a mobile device is time-consuming and cumbersome [60]. The question here is whether a user interface directed towards user tasks (i.e., a task-oriented user interface) can be more effective for users than function-oriented user interfaces.


3.1.3 Multiple-Device Tasks

A multiple-device task is a user task which spans across multiple devices. For

example, the task of adjusting the brightness of a room is a multiple-device task

which involves two devices: a ceiling light and a window drape. Achieving this type of task requires the devices to work in collaboration with each other. However, the application-centric paradigm does not support collaboration among devices. With the application-centric paradigm, each controllable device has its own controller mobile application, which mostly operates in isolation from other applications and has no knowledge of the existence of other applications. Therefore, it is impossible for an application to automatically discover and compose functions provided by other applications in order to accomplish a multiple-device task.

Although the application-centric paradigm does have mechanisms (e.g., the clipboard and file extensions) for exchanging data (e.g., text and images) between different applications, it does not have a mechanism for sharing functions between applications (i.e., without prior knowledge of other applications, an application is not able to discover and use the functions they provide).

Therefore, to achieve a multiple-device task, the user must manually split the task

into sub-tasks and map them to the right applications, or even to the right func-

tions of the applications, and switch between the applications during the execution

of the task.

Let us reconsider the task of adjusting the brightness of a room: the light and the window drape can be remotely controlled using two mobile applications on a smartphone. To brighten the room with more natural light from outside, the user first splits the task into two sub-tasks: adjust the light and adjust the window drape. Then, the user maps the first sub-task to the light controller application and the second sub-task to the window drape controller application. Next, the user runs the light controller application to adjust the light and the window drape controller application to adjust the drape. He/she may need to switch between these applications several times to achieve the preferred level of brightness. This process places a considerable cognitive demand on the user.

To make the above task easier for the user, one could develop an additional application which integrates the functions of the light and the window drape, allowing the user to control the two devices at the same time through one user interface. This solution reduces the number of applications the user needs to run and eliminates the process of splitting the task into sub-tasks and mapping them to different applications. However, it requires the user to maintain yet more applications on his/her mobile computer, which could make the issue of too many applications even worse. The question here is whether a task-oriented user interface requires fewer applications on the user's mobile computer.

3.1.4 Replication, Composition and Customisation of Tasks

There are tasks which users frequently repeat at different times in different smart environments. For example, the task of adjusting the brightness of a room is repeated every time the user enters or leaves a room at the workplace or at home. Reusing knowledge for accomplishing similar tasks in different smart environments can improve the efficiency of the user's task accomplishment. However,

the application-centric paradigm does not support this. Indeed, an application for

controlling a device in one smart environment cannot be used to control a similar

device in another smart environment. Therefore, the process for accomplishing a

task in one smart environment cannot be repeated/reused for accomplishing the

same task in another environment.

The user may also want to compose functions provided by different devices to fulfil a certain task. The resulting composition may be recorded for later re-use or shared with other users. An existing composition can also be customised to suit the user's preferences and the availability of devices

in a local smart environment. For example, given an application for switching the

lights on/off and another application for opening/closing the windows’ drapes


of a smart home, the user may want to compose these functions to achieve a task called leaving home so that, before he/she leaves home, he/she can quickly execute this task to turn the lights off and close the drapes. He/she may also want a similar task called arriving home so that, on arriving home, he/she can quickly execute the task to turn the lights on and open the drapes. The application-centric paradigm does not meet these needs, because it does not allow the user to compose functions provided by different devices unless additional applications are developed for the purpose.

To address these issues, we propose to apply the task-oriented paradigm (or task-centric paradigm) to user interaction with smart environments. The aim of the task-oriented paradigm is to allow users to focus on their tasks rather than on individual devices and applications (i.e., the tools). Our hypothesis is that the task-oriented paradigm can reduce users' cognitive load in accomplishing their tasks using devices in smart environments.

3.2 Overview of Proposed Solution

In this section, we outline the requirements of the proposed solution which we

want to achieve. We also highlight our assumptions behind the design of the pro-

posed solution.

To address the aforementioned issues of the application-centric paradigm, we outline the following requirements, which must be met by the proposed solution. These requirements are designed to ensure that a solution fulfilling them also addresses those issues.

3.2.1 Requirements

• The solution must scale with many devices in smart environments. It should provide a united user interface for the user to interact with whatever smart environments he/she may encounter. A united user interface means a single and consistent user interface. With this paradigm, tasks are the first-class objects. There is no concept of applications (of course, we are not trying to remove or replace all applications unless all tasks are achievable without applications); therefore, application-related activities such as finding, downloading, (un)installing, and updating applications are no longer necessary. Nor is there a need for other mental activities such as determining which devices should be used for a task, splitting the task into sub-tasks, and mapping them to the devices. An envisioned scenario of task-oriented interaction with a smart environment is the following: the user wants to adjust the brightness of the smart living room, so he/she simply tells the system (we assume there will be a system that implements this task-oriented paradigm and provides a united user interface for interacting with smart environments) something like “adjust brightness of living room”; the system then shows a user interface with two sliders whose values the user can change to adjust the light and the window drape. Note that the actual representation of the user interface and user input can differ depending on the available interaction modality, such as vision or audition.

• The solution must scale with many functions of a device. It should provide a user interface that is directed towards user tasks instead of the functions of devices. However, the combination of different functions could enable an exponential number of tasks; to deal with this, the solution should allow users to quickly express their intended tasks.

• The solution must support tasks involving multiple devices. It should allow the user to achieve multiple-device tasks without needing to install any additional applications.

• The solution must support replication, composition, and customisation of

tasks.


3.2.2 Assumptions

The task-centric paradigm is built on top of service-oriented computing. Functions of devices are provided as services (e.g., web services) which can be bound into task models and invoked during the execution of tasks.

3.3 Research Approach

3.4 Evaluation Methods

In this research, we use three evaluation methods: an analytical comparison with

existing frameworks, a user experiment in a simulated smart environment using

the prototype system, and a user survey.

Analytical comparison: We compare our approach with existing approaches from two perspectives: system development and usability. From the development perspective, we compare the approaches in terms of features that support the development and deployment of task-oriented systems in smart environments. From the usability perspective, we analyse the advantages of TASKOS and compare them against those of the existing approaches.

User experiment: To evaluate the extent to which our system is easy, effective, efficient, satisfying, and learnable for users interacting with smart environments (i.e., operating electronic devices and pervasive services to accomplish their tasks), we conduct a user experiment, which allows us to measure how users perform their tasks using TASKOS.

The participants are asked to accomplish a number of tasks with and without

using TASKUI. The performance of the subjects is measured using several

metrics, including the time to complete a task, the number of errors made

while attempting to complete a task, and how often external help was re-

quired to complete a task.


User survey: The user survey is conducted after the user experiment. It allows us to gather the participants' opinions, experiences, and perceptions of the application-centric paradigm, the task-centric paradigm, TASKUI, and the context-aware task suggestion mechanism.

Moreover, by implementing the prototype system and demonstrating it in operation, we can show the feasibility and practicality of developing, deploying, and applying the proposed framework.

3.5 Implementation Approach

We develop a prototype system which includes a client application (TASKUI) and a server application (TASKOS). The prototype system illustrates the feasibility of implementing the proposed solution. The client application is a mobile application

that the users will use to interact with a smart environment to accomplish their

tasks. The server application manages a task execution engine, a task repository,

task instances, and users’ profiles. It also processes the user’s context to generate

suggestions of tasks for each of the users.

We use XML for specifying task models because XML can be understood by both humans and machines. XML specifications are semi-structured data which can be exchanged and manipulated independently of the underlying platforms.

We use the Android SDK for implementing the task-based user interface application, which can run on a large number of smartphones and tablets. We use web and Java Servlet technologies to implement the server-side components because they support well the service-oriented computing on which our framework is based.


Chapter 4

The TASKCOM Framework

In the previous chapters, we have described the usability problems of smart environments and the desired features for end-users which have not been met by the current application-centric paradigm. The main usability problem is the complexity of using smart environments. The desired features include providing task guidance, supporting multiple-device tasks, and supporting user collaboration in task accomplishment. We have also reviewed representative existing approaches which attempt to tackle these problems. We have argued that the application-centric paradigm for user interaction with smart environments has several limitations and provides limited support for the desired features we have identified. We hypothesise that the task-oriented user interaction paradigm would improve the user experience in smart environments.

To test our hypothesis, we design and implement an integrated task-oriented computing framework, which we call TASKCOM. We use this framework to apply the task-oriented interaction paradigm in smart environments. In particular, TASKCOM provides a development and deployment methodology, a tool, and systems that allow us to deploy the task-oriented interaction paradigm in smart environments. In this chapter, we present the overall idea, the concepts, the design, and the system architecture of TASKCOM.


4.1 Overview of the TASKCOM Framework

Our TASKCOM framework is an implementation of the concept of the task-oriented

interaction paradigm in smart environments. Its main goal is to provide a method-

ology and a tool for the development, deployment, and management of a task-

oriented system (TASKOS) for smart environments. TASKCOM also includes an

underlying runtime platform and a mobile application (TASKUI). TASKUI allows

end-users to interact with smart environments in the form of tasks. It provides a

unified system and interface for the end-users to access functions provided by the

smart environments. The tool allows programmers to configure their task-oriented

systems and smart environments to specific requirements. The runtime platform

manages task models, task instances (e.g., execution status of tasks), users, context

(e.g., users’ locations), service invocations, and generation of user interfaces (e.g.,

task guidance). TASKCOM also provides a specification schema that allows de-

velopers (or end-users) to specify their task models that can be validated using a

validation tool.

Figure 4.1: Deployment architecture of TASKOS. (The user's TASKUI application communicates with TASKOS servers embedded in the car, home, workplace, and other environments.)

To give an overall idea of the framework, Figure 4.1 illustrates the deployment

architecture of TASKOS systems in smart environments. In this framework, an

end-user interacts with his/her smart environments using the TASKUI application. TASKUI is designed to allow the user to interact with as many smart environments (and hence as many devices) as he/she wants. Each smart environment is embedded with one TASKOS server system, but one TASKOS server system can manage more than one smart environment.

4.2 Concepts

In this section, we explain the concepts behind the entities in the framework.

4.2.1 Task Computing Framework

Our framework (which we call a task computing framework, TASKCOM) is a development and deployment framework providing the concepts, methodologies, schemas, and tools for developing and deploying TASKOS systems in smart environments.

4.2.2 A Model of Smart Environments

Informally, any physical environment can be seen as a smart environment. However, our concept of a smart environment covers not only physical environments, which are determined by a geographical boundary, but also virtual environments, which are determined by a name given by a group of users. A smart environment contains things that can be used or manipulated. We define a model of a smart environment by identifying the elements it can contain. Formally, a smart environment is defined recursively as follows:

Definition 4.1 (Smart environment). An environment that contains a single device (e.g., a light or a television) is a smart environment. An environment that contains a single service (e.g., a web service which allows a user to book a taxi) is a smart environment. Finally, an environment that contains other smart environments is itself a smart environment.

Examples of smart environments include a country, a city, a social community,

a university, a library, a room, a car, a smartphone, a television, a washing machine,


and so on. In this thesis, we are interested in smart environments which contain a number of devices and services, and which provide many functions that allow end-users to accomplish a number of tasks.
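Definition 4.1 is essentially a recursive composite structure. As an illustration only (the class and method names below are our own hypothetical sketch, not part of TASKCOM), the definition could be modelled as follows:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of Definition 4.1: a smart environment is a single
// device, a single service, or a container of other smart environments.
interface SmartEnvironment {
    int elementCount(); // total number of devices and services contained
}

class Device implements SmartEnvironment {     // e.g., a light or a television
    final String name;
    Device(String name) { this.name = name; }
    public int elementCount() { return 1; }
}

class Service implements SmartEnvironment {    // e.g., a taxi-booking web service
    final String name;
    Service(String name) { this.name = name; }
    public int elementCount() { return 1; }
}

class CompositeEnvironment implements SmartEnvironment { // e.g., a room, a home, a city
    final String name;
    final List<SmartEnvironment> children = new ArrayList<>();
    CompositeEnvironment(String name) { this.name = name; }
    CompositeEnvironment add(SmartEnvironment e) { children.add(e); return this; }
    public int elementCount() {
        int n = 0;
        for (SmartEnvironment e : children) n += e.elementCount();
        return n;
    }
}
```

Under this reading, a home containing a living room (with a ceiling light and a window drape) and a taxi-booking service is itself a smart environment, and environments that contain other environments come for free from the recursion.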

4.2.3 Tasks

There are many definitions of a task in the literature (e.g., [6, 12, 23, 25, 64–67]), but there is no commonly accepted one. We adopt the definition that a task is a set of actions performed collaboratively by users and machines to achieve the goal of the task.

A task is different from a service: a task is a goal or objective, while a service is the performance of tasks. Tasks are associated with what the user wants to accomplish, and services with the environmental capabilities to complete the tasks [25].

A task is also not the same as a problem, though some consider them equivalent. This is justified by the fact that one can say "perform a task" but cannot say "perform a problem", which shows their inherent difference. A task can, however, be seen as a sequence of problem-solving steps; therefore, a task's name necessarily includes a verb representing problem-solving activities [68].

A task may involve only services (e.g., sending an email or booking a taxi). Other tasks may involve controlling physical devices (e.g., adjusting a heater or turning on a light). A task can be a composition of other tasks (called sub-tasks). For example, a waking-up task could be a sequence of sub-tasks including turning on the bathroom's heater, playing a song, opening the window blind, and making coffee.

Tasks are the fundamental elements in TASKCOM. We define a task by describing the parts which constitute it. A task has a goal, which is normally expressed by a human-understandable phrase (e.g., “Borrow a book” or “Watch TV”). A user achieves the goal of a task by carrying out a sequence of steps (where the order is significant) or a set of steps (where the order is insignificant), each of which either requires accomplishing another task (called a sub-task), directly performing an action on a physical device, or invoking a software service. In this


thesis, we focus on user tasks which involve manipulating physical devices and invoking software services.

A task may involve multiple devices and multiple users. We call a task that

has been instantiated to be carried out in a particular smart environment a task

instance. A user may at any time suspend a task instance and resume it later. Tasks

may be planned ahead and triggered to start based on criteria (i.e., applicable context) such as time and location. A task instance can be handed over to another person, or shared to enable distributed collaboration.
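The task-instance lifecycle described above (start, suspend, resume, hand over) can be read as a small state machine. The following is our own illustrative sketch of that reading; the state and method names are hypothetical, not the actual TASKOS implementation:

```java
// Illustrative task-instance lifecycle, assuming the operations named in the text.
enum TaskState { CREATED, RUNNING, SUSPENDED, COMPLETED }

class TaskInstance {
    private TaskState state = TaskState.CREATED;
    private String owner;
    TaskInstance(String owner) { this.owner = owner; }
    TaskState state() { return state; }
    String owner() { return owner; }

    void start()    { require(state == TaskState.CREATED);   state = TaskState.RUNNING; }
    void suspend()  { require(state == TaskState.RUNNING);   state = TaskState.SUSPENDED; }
    void resume()   { require(state == TaskState.SUSPENDED); state = TaskState.RUNNING; }
    void complete() { require(state == TaskState.RUNNING);   state = TaskState.COMPLETED; }

    // Hand the (possibly suspended) instance over to another user.
    void handOver(String newOwner) {
        require(state != TaskState.COMPLETED);
        owner = newOwner;
    }

    private static void require(boolean ok) {
        if (!ok) throw new IllegalStateException("invalid lifecycle transition");
    }
}
```

For example, Bob could start and suspend a task instance, hand it over to a colleague, who then resumes and completes it.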

4.2.4 Task Models

The required steps and data of a task form a conceptual model of the task, which we call a task model [69]. A task model is an execution model, or routine, of a task that describes how the task should be performed to reach its goal.

A specification of a task model1 is an actual document (e.g., an XML document)

written in a standard language which encapsulates all aspects of a task and which

can be validated and interpreted by a machine.

Task specifications can be shared between parties and combined in different ways to produce composite task models. There can be local and global repositories which archive task models. A task model contains steps, instructions, and conditions which tell end-users or machines how the task can be accomplished. A model of a task can be decomposed into sub-tasks. A task model also includes information about interaction and logic, including the use of services: the interaction information is used by the system to generate the user interface and task guidance, while the logic is used to generate the structure, navigation, and flow of the task. A task model can include pre-conditions, post-conditions, required services, required device functions, applicable context, and choices.

To specify a model of a task, a programmer needs to answer several questions:

Can the task be a composition of other tasks? What is the flow through the task?

1Some authors call it a task model description [6].


How does the task begin? How does it end? What are the instructions? What manual actions are needed to perform the task? What information is needed to complete the task, and where does it come from? By answering these questions, the programmer will have specified the task.
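To make these questions concrete, the following fragment sketches what an XML task-model specification might look like for the room-brightness task used earlier in this chapter. The element and attribute names here are invented for illustration; they are not the actual TASKCOM schema:

```xml
<!-- Illustrative only: a hypothetical task-model specification answering the
     questions above (goal, steps, required services, pre-/post-conditions). -->
<task goal="Adjust brightness of living room">
  <precondition>user is located in the living room</precondition>
  <steps ordered="false">
    <step instruction="Set the ceiling light level">
      <service ref="http://example.org/light/setLevel" input="level"/>
    </step>
    <step instruction="Open or close the window drape">
      <service ref="http://example.org/drape/setPosition" input="position"/>
    </step>
  </steps>
  <postcondition>room brightness matches the user's preference</postcondition>
</task>
```

In this sketch, each step carries the interaction information (an instruction for the user) alongside the logic (a bound service to invoke), mirroring the separation described above.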

4.2.5 Task-Based User Interfaces

We refer to a task-based user interface as a user interface which lets a user interact with smart environments in terms of the tasks he/she wants to achieve. Basically, a task-based user interface lets the user express a task he/she wants accomplished. The interface captures the user's intent and provides relevant guidance for him/her to accomplish the task. A task-based user interface should not present a static matrix of buttons (e.g., for controlling a particular device) or a rigid hierarchical menu like traditional menus on desktop computers. TASKUI is an implementation of the task-based user interface.

4.2.6 Context-Aware Task Suggestion

A smart environment can provide an abundance of computing capabilities, some of which are embedded naturally into everyday objects [70]. A user may not recognise the presence of the computing capabilities available to him/her [71], especially in a public smart environment. For example, there could be tens of computing devices and services in a smart meeting room, some of which may not be physically visible to users. Such a smart meeting room may support tens of tasks which a user can accomplish. The question here is how a user gets to know which tasks he/she can accomplish in the current smart environment; for example, how does a user know that music can be played in the meeting room? An obvious solution is to provide the user with a list of the tasks supported by a smart environment. However, the number of supported tasks can be large and can change frequently (e.g., as the environment changes), so it is challenging to maintain such a list effectively and efficiently. We investigate


this solution further in order to maintain the list of supported tasks effectively and efficiently. We also provide a mechanism that lets users quickly check and start a task. We call this mechanism context-aware task suggestion.

Our context-aware task suggestion [72] is based on the idea of context-aware recommender systems [73–75], which aim to recommend relevant information and services to a user based on the user's context. Context-aware task suggestion likewise suggests available tasks to a user based on his/her context.
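As a minimal sketch of this idea (illustrative code of our own, not the TASKOS suggestion engine; all names are hypothetical), context-aware task suggestion can be reduced to filtering the task repository for task models whose applicable context matches the user's current context:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative context-aware task suggestion: a task model declares the
// context in which it is applicable, and suggestion filters the repository
// by the user's current context (here, just the user's location).
class TaskModel {
    final String goal;
    final String applicableLocation; // e.g., "meeting room"
    TaskModel(String goal, String applicableLocation) {
        this.goal = goal;
        this.applicableLocation = applicableLocation;
    }
}

class TaskSuggester {
    private final List<TaskModel> repository = new ArrayList<>();
    void register(TaskModel m) { repository.add(m); }

    // Return the goals of tasks applicable in the user's current location.
    List<String> suggest(Map<String, String> userContext) {
        List<String> suggestions = new ArrayList<>();
        String location = userContext.get("location");
        for (TaskModel m : repository) {
            if (m.applicableLocation.equals(location)) suggestions.add(m.goal);
        }
        return suggestions;
    }
}
```

A real suggester would of course match on richer context (time, user profile, device availability) and rank the results, but the filtering step above is the core of the mechanism as described in this section.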

4.2.7 Pointing & Tasking Metaphor

An example of the pointing-and-controlling metaphor is AIDE [42], which allows the user to point a laser beam at a device to select it; the system then brings up a user interface that shows all the functions of the selected device. Our metaphor instead brings up a list of possible tasks which can be done with the pointed-at device. We aim towards scenarios where a user can retrieve a list of possible tasks when he/she points to a particular object (e.g., a book, a television, a person, or a place). Imagine that you select a book in a smart library by scanning its barcode with your smartphone's camera, and are then suggested a list of tasks including 'Borrow this book'.

4.2.8 Relations Between Concepts

Figure 4.2: Relations between entities in TASKCOM (Smart Environment, Task Models, TASKOS, and TASKUI).

Figure 4.2 illustrates the relations between the aforementioned concepts of entities in our framework.


4.3 Scenarios

We explain the ideas, features, and advantages of our approach by means of scenarios in which end-users and developers use the system and the tool provided by the framework. The following scenarios demonstrate how our framework works for end-users and software developers; in other words, they present what we want to provide for these two groups.

4.3.1 An End-User’s Scenarios

Bob is a lecturer at a university in Melbourne. He is one of the end-users of our system. To use the system, Bob has installed a mobile application called TASKUI on his smartphone and tablet. TASKUI is one of the components of our framework.

4.3.1.1 (U1) Pointing & Tasking Metaphor:

Bob is in his smart personal office. He "points" his smartphone at a printer-photocopier, and TASKUI displays a list of supported tasks which can be accomplished with the printer-photocopier. If Bob points his smartphone at a TV, TASKUI displays a list of supported tasks which can be accomplished with the TV. Figure 5.26 depicts these task suggestions. If Bob selects one of the supported tasks, TASKUI will guide him through the steps to accomplish the selected task.

This scenario demonstrates the pointing & tasking metaphor, in which the smartphone acts as a universal remote control as well as a task-based user interface.

4.3.1.2 (U2) Task Suggestions Based on User’s Location:

Bob’s entered into the university campus zone, he consults TASKUI for available supported

tasks. TASKUI shows him a list of tasks supported by the university (see Figure 5.28.c).

Bob selects the “Find a parking spot” task which will then guide him to an available car

parking spot. Bob parks his car and goes to his personal office. Bob steps into the office

49

Page 67: A Framework for a Task-Oriented User Interaction with ...homepage.cs.latrobe.edu.au/ccvo/papers/Thesis.pdf · A Framework for a Task-Oriented User Interaction with Smart Environments

and consults TASKUI for supported tasks which allows him to control devices in the office.

TASKUI shows him a list of support tasks as shown in Figure 5.28.d.

This scenario illustrates the system's ability to help end-users build up their mental model of a smart environment and to keep them aware of available tasks.

4.3.1.3 (U3) Tasking with Multiple Devices:

Bob is in his office. He "tells" TASKUI that he wants to brighten the room. TASKUI presents him with a screen containing two sliders: one for adjusting the light and the other for adjusting the window drape (see Figure 5.18.b). Note that this is possible without requiring him to install any mobile applications (other than TASKUI) or perform any configuration on his smartphone.

4.3.1.4 (U4) Guidance, Navigation, and Multiple Step Tasking:

Bob’s just been provided a new coffee maker in his office which can be controlled remotely

via TASKUI. He asks TASKUI to make him a cappuccino (e.g., by selecting the “Make

cappuccino” task on a list of supported tasks). Again note that, this new task automati-

cally appears on the supported task list without any configuration required for Bob. We

will show how this can happen later. Bob does not need to consult the coffee maker’s

manual because TASKUI will guide him through steps to make the coffee he prefers (see

Figure 5.20).

Users often have multiple tasks at the same time, and these tasks are often not completed in a single session. The following scenario shows that our system allows users to switch between and interleave tasks, and to suspend and resume them.

4.3.1.5 (U5) Suspending and Resuming Task Instances:

Bob is going to attend a conference in Sydney. While waiting in his office for a taxi to take him to the airport, he starts booking accommodation using TASKUI on his tablet (perhaps he prefers the tablet because he wants a better view of the photos of the accommodation options). While he is proceeding through the steps of selecting and booking, the taxi arrives. He suspends the booking task on the tablet and leaves the tablet behind. He takes his smartphone with him and resumes the accommodation booking using TASKUI on the smartphone. Suspending and resuming a task like this accommodation booking is as simple as clicking a button; the only requirement is that Bob uses the same username for TASKUI on both the smartphone and the tablet.

4.3.1.6 (U6) Tasking in Unfamiliar Smart Environments:

Bob’s arrived in Sydney. It is a capital city and that has changed a lot in terms of providing

a better experience for visitors since the last time Bob visited it. One of the changes has been

made for Sydney is that it has been integrated with our system (i.e., TASKOS) to provide

visitors task-oriented user experience in the exploration and use of its provided services.

Bob’s has been advised of this and right after arrival in Sydney, Bob asks TASKUI to con-

nect to the local TASKOS system by which Bob starts to receive suggestions of supported

tasks such as checking public transport timetables, booking local taxis, finding directions,

interacting with public large displays, so on. Asking TASKUI to connect to the local task

system is as difficult as entering a URL of that task system. Bob comes in the conference

place, a registration desk receptionist advises him to connect his TASKUI to the temporary

task system which is set up for this conference. Having connected to the conference’s task

system, Bob’s kept being aware of supported tasks which allow him to control provided

devices and services during the course of the conference. For example, there is a task that

allows him to easily browse his presentation file, project it, and control the slideshow. There

is a task that allows him to exchange his contact with others. Other tasks allows controlling

lights, music players, televisions, coffee machines provided in the conference rooms.

4.3.1.7 (U7) Task Collaboration and Sharing:

At the conference, Bob is interested in a book title which is referenced by one of the presentations. Bob wants this book to read on the weekend when he returns. He decides to borrow the book remotely with the help of his colleague Alice, who works at his university. To achieve this, Bob asks TASKUI to start the book-borrowing task which is provided by his university's task system (note that TASKUI already has a connection with the university's task system). The book-borrowing task allows Bob to search for and locate the book in the library (see Steps 1, 2, 3, and 4 in Figure 5.19). Having found the shelf where the book is located, Bob decides to share the task with Alice (Figure 5.22.b) so that she can help him check out the book. Once Alice has received the notification of this shared task (Figures 5.22.c and 5.22.d), she can continue the task on Bob's behalf, e.g., taking the book from the shelf and scanning its barcode using the camera of her mobile device (see Steps 5 and 6 in Figure 5.19). While Alice is continuing the task, Bob can observe and check the progress synchronously if he wants. Once Alice has finished scanning the barcode of the book, she shares the task back to Bob so that he can finalise it, for example by scanning his library card to check out the book (see Steps 7 and 8 in Figure 5.19). Note that while the book-borrowing task is unfinished, Bob can still switch to other tasks if he wants. When Bob comes back to an unfinished task, TASKUI resumes the task at the step where it was previously left.

4.3.1.8 (U8) Distributed Observation of Task Accomplishments:

While attending the conference in Sydney, Bob is still able, via TASKUI, to observe the accomplishment of the biology laboratory tasks which have been scheduled and assigned to a group of students he supervises. Each of these tasks requires the students to accomplish a sequence of steps involving the use of the facilities and appliances provided in the laboratory. The students follow the task guidance/instructions shown on the TASKUI screen to accomplish the required steps and report the results. Because these tasks are shared between Bob and the student group, Bob is able to remotely monitor the students' progress and to provide help if needed.

4.3.1.9 (U9) Tasking with Zero-Configuration:

Bob comes back to his university after the conference. He enters his office. As usual, he consults TASKUI to control the light of his office (e.g., by pointing his smartphone at the light). This time TASKUI presents him with a different task list that includes a new task called "Change the light's colour", which allows him to change the colour of the light, for example to warm white or cool white. TASKUI also notifies him of a new supported task called "Make espresso" using the existing coffee maker in his office. Note that these two tasks are automatically available to Bob with zero configuration required on his part. In fact, the two tasks were added into the underlying task system by the developers while Bob was away at the conference.

This scenario shows that with the task-oriented paradigm, the concept of "upgrading" software may quickly become anachronistic.

4.3.1.10 (U10) Remote Controlling:

On another day, Bob has left home and is taking a tram to work. He wants his office to be warm when he arrives. He asks TASKUI to "turn on heater" (note that the current design of our system does not support commands like "turn on heater in my office"; we will justify this design decision later). In response, TASKUI asks Bob to confirm which heater he wants turned on, because Bob has one heater at home and one heater at the office. This confirmation is necessary because Bob is currently on the tram: he is neither at home nor at the office, so TASKUI has no idea which currently active smart environment the selected task should be mapped to. Moreover, TASKUI has found two similar turn-on-heater tasks provided by the task systems it is currently connected to (his home's task system and the university's task system): one controls a heater at home and the other controls a heater at the office.

4.3.2 A Developer’s Scenarios

Carol is a developer who uses our system to manage the tasks supported by the university, including those of the library, buildings, rooms, and even individual devices such as a coffee maker.

4.3.2.1 (D1) Adding Tasks:

When a new device or service is installed, Carol also adds the tasks which are supported by this device/service into the system. End-users like Bob can then discover the new tasks and execute them if they want. Similarly, when an existing device or service is removed, Carol also removes the corresponding tasks. When a device or service is upgraded, Carol also updates its associated tasks. These configurations do not require any intervention from the end-users; interested end-users are simply notified of the availability of the supported tasks. We assume that each device or service provides several basic tasks which it is supposed to support. For example, a light may provide two basic tasks: turning on and turning off.

4.3.2.2 (D2) Defining Composite Tasks:

Besides the basic tasks which are supported by individual devices or services, Carol can define a new task by composing existing tasks in a particular sequence. For example, Carol combines a light's adjusting task and a window drape's adjusting task to provide a new task that allows Bob to adjust his room's brightness by adjusting the light and the drape at the same time for a combined effect. Note that without the composite task, Bob would need to execute two different tasks (i.e., the light's adjusting task and the drape's adjusting task) and switch between them to achieve the combined effect.
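Composition of this kind can be sketched as building a new task specification out of references to existing ones. The task names and the dictionary shape below are illustrative assumptions, not the actual TASKCOM specification format.

```python
# Sketch: a composite task is an ordered composition of existing task
# specifications. Names ("Adjust light", "Adjust drape") are illustrative.

def compose(name, *subtasks):
    """Create a new task specification from existing specifications."""
    return {"name": name, "subtasks": list(subtasks)}

adjust_light = {"name": "Adjust light", "subtasks": []}
adjust_drape = {"name": "Adjust drape", "subtasks": []}

# Carol defines "Adjust room brightness" without writing new device code:
brightness = compose("Adjust room brightness", adjust_light, adjust_drape)
print([t["name"] for t in brightness["subtasks"]])
# → ['Adjust light', 'Adjust drape']
```

The point of the sketch is that the composite refers to the existing specifications rather than copying them, so an update to the light's adjusting task is automatically reflected in the composite.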

Carol’s added a new supported task that allows the end-users to print documents via

the printers across the university’s campus. When Bob enters into the university’s cam-

pus zone, he receives a notification on his smartphone that lets him know about this new

possible task.

This scenario shows that with TASKCOM, a smart environment is programmable: the framework allows developers to dynamically configure a smart environment to specific requirements without requiring intervention from the end-users (e.g., (re)installation or configuration of software on the end-user side). With this feature, the smart environment is not bogged down by the numerous low-level details of a static set of supported tasks.


4.4 Architecture

Figure 4.3 illustrates the conceptual component and deployment architecture of the framework. The architecture consists of task servers and TASKUI instances. A task server hosts a runtime system, TASKOS, which is fitted to a particular smart environment (e.g., a university, a conference room, a theatre, a home, or even a coffee maker). A task server has several components, including a user profile manager, a task execution engine, a task repository manager, and a task suggestion engine. We describe how these components are implemented in the Implementation chapter; here, we explain their purposes in the framework.

Figure 4.3: Conceptual component and deployment architecture. Mobile devices run TASKUI clients which communicate over networks (e.g., WiFi, 3G, GPRS) with task servers; each task server contains a task execution engine, a task suggestion engine, a task repository manager, a user profile manager, and a store of task models.

Note that a TASKUI instance can connect to many task servers at the same time. For example, a university has its own task server that allows staff and students to accomplish the university's supported tasks. A smart conference room may also have its own task server which supports tasks for controlling appliances within the room. An end-user may also have his/her own personal task server that allows him/her to carry out personalised tasks.


4.4.1 TASKUI

TASKUI is client-side mobile software which runs on intermediary handheld computers (e.g., smartphones) and acts as a user interface through which a user interacts with the task instances supported by the connected TASKOS servers. TASKUI communicates with the task execution engine to exchange messages which contain the information needed to generate user interfaces for tasks. TASKUI can connect to multiple task servers concurrently, allowing users to access tasks hosted on different task servers at the same time. Users can connect to or disconnect from a particular task server as they wish. TASKUI provides a task-based user interface which presents the user with suggested tasks, instructions for accomplishing tasks, and other task features (e.g., next, back, cancel, skip, and share). TASKUI can provide the same look-and-feel user interface experience for users because it reuses the native user interface elements provided by the hosting mobile platforms.

One of the assumptions we make here is that people increasingly carry handheld computers. Many handheld computers already have networking and computational capabilities, and they often have built-in sensors and speech recognition capability which can be used to gather the user's context and inputs while the user is accomplishing a task.

4.4.2 TASKOS

TASKOS is a server-side application which is associated with one or more smart environments to manage the tasks supported by those environments. TASKOS lets administrators, developers, or users define, validate, and manage task model specifications for a smart environment. Any changes to the task specifications are ultimately synchronised with the connected TASKUI components. Figure 4.2 visualises the relations between the conceptual entities within TASKCOM: smart environments, task models, the TASKUI instances for end-users, and the TASKOS back-ends. Specifically, each smart environment is embedded with an instance of TASKOS; an instance of TASKOS can support multiple task models; an instance of TASKOS can be connected to by multiple instances of TASKUI; and an instance of TASKUI can connect to multiple instances of TASKOS. TASKOS acts as a bridge that links user tasks to the available functions of a smart environment via task models, and it shields users from variations in device and service availability.

4.4.3 User Profile Manager

The user profile manager manages user registration and authentication. We need user registration and authentication because we want registered users to be able to manage their own task instances; for example, they can share their tasks with other users and receive personalised task suggestions. Consequently, at any given time, one user can have many task instances on a particular task server, and one task model can be instantiated for many users.
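This many-to-many relation between users and task models can be sketched as follows; the registry layout, model names, and ID scheme are illustrative assumptions, not the actual user profile manager implementation.

```python
# Sketch: one user may hold several instances of the same task model, and
# one task model may be instantiated by many users. Names are illustrative.
import itertools

_ids = itertools.count(1)  # unique instance IDs

def instantiate(model, user, registry):
    """Create a task instance of `model` for `user` and record it."""
    instance = {"id": next(_ids), "model": model, "user": user}
    registry.setdefault(user, []).append(instance)
    return instance

registry = {}
instantiate("borrow-book", "bob", registry)
instantiate("borrow-book", "bob", registry)    # same user, same model, twice
instantiate("borrow-book", "alice", registry)  # same model, another user
print({u: len(ts) for u, ts in registry.items()})
# → {'bob': 2, 'alice': 1}
```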

4.4.4 Task Execution Engine

The task execution engine is a runtime engine which validates and executes task models and manages the life-cycle of task instances (e.g., instantiation, suspension, resumption, collaboration, and termination). The task execution engine communicates with TASKUI to exchange data messages while tasks are being accomplished. The exchanged messages contain information about how to generate the user interface for a particular step of a task. The user interface for a step presents instructions and graphical user interface controls for accepting inputs from the user.

4.4.5 Task Repository Manager

The task repository manager allows developers to manage the task models for a smart environment and its nested environments (e.g., to define, add, remove, and update task models). It also provides a tool for validating that task models conform to our schema.


4.4.6 Task Suggestion Engine

The task suggestion engine generates real-time task suggestions for a user based on their context (e.g., location). For example, when a user moves from his/her car "environment" to his/her home "environment", the home-relevant tasks are suggested while the car-relevant tasks are hidden (or given lower priority on the task list). Then, when the user points2 his/her mobile device at the heater, the heater-relevant tasks are suggested.

4.5 Task Modelling

Task modelling is the process of developing and describing task models. The results of this process are specifications of task models, which represent tasks. In TASKCOM, task models are specified in a standard, machine-readable format; we call these task model specifications, or task models for short. Task models are primary input elements in TASKCOM. They are defined by developers or end-users. A task model specification is an XML document which specifies the task's properties, its decomposition (i.e., the steps to execute the task), inputs, outputs, required services, conditions, instructions, and the user interface representation of the task and its sub-tasks.
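As an illustration of what such an XML document might contain, the sketch below parses a hypothetical specification. The element and attribute names (`task`, `step`, `service`) are assumptions for illustration only; they are not the actual TASKCOM schema.

```python
# Parse a hypothetical task model specification. The element names used
# here (<task>, <step>, <service>) are illustrative, not the real schema.
import xml.etree.ElementTree as ET

SPEC = """
<task name="Make cappuccino">
  <step name="Enter number of sugars" input="number"/>
  <service name="coffeeMaker.brew" args="cappuccino"/>
  <step name="Take the coffee"/>
</task>
"""

root = ET.fromstring(SPEC)
steps = [child.get("name") for child in root]
print(root.get("name"), steps)
# → Make cappuccino ['Enter number of sugars', 'coffeeMaker.brew', 'Take the coffee']
```

Note how the decomposition mixes user-facing steps (which drive generated UI screens) with service calls (which invoke device functions); this mirrors the dual role of task specifications described next.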

The general idea behind TASKCOM is that we use task specifications to map users' high-level tasks to low-level services and devices' functions, and we use the same task specifications to generate task guidance. Our specification schema provides the following features:

Reuse of task specifications. A new task specification can be created by composing existing task specifications. This is achieved by the use of services and/or task references within task specifications.

Dynamic task decomposition. Parts of a task's decomposition can be dynamically added or removed at run-time through the use of services and the back stack. Specifically, an invocation of a service can return a new task specification depending on the provided arguments. The returned task specification is added to the task decomposition tree as a sub-task, replacing that service. The back stack in TASKCOM allows the user to go back to the previous step, which can result in removing a sub-task from the task decomposition tree if that sub-task was previously added as the result of a service invocation.

2Where pointing could mean the use of compass and location reasoning, the mobile camera with image recognition (in the augmented-reality style), RFID technology, NFC technology, ultrasound, or infrared technology.

Automatic generation of user interfaces for tasks. Based on task model specifications, TASKUI can generate user interfaces which present instructions for the user to execute the tasks. TASKUI uses native user interface elements and third-party components to present information to the user and to request user inputs. By using native user interface elements, TASKUI avoids imposing a uniform look and feel across the different mobile platforms. And by using third-party components, TASKUI hides the concept of applications and invokes the required components (which are provided by third-party applications) on demand. For example, a map application may have many components, one of which computes and shows a route on the map; TASKUI calls this component whenever it needs to compute and show a route. Similarly, TASKUI can start a barcode reader application whenever a task requests the user to scan a barcode. This feature relies on our assumption that these components are discoverable and are allowed to be started from outside their main applications. On the Android platform, this assumption is satisfied.

4.5.1 Task Composition and Decomposition

Task composition lets developers and users create new task specifications by combining existing task specifications and ordering them to best suit their requirements. Task decomposition is the reverse of task composition. In TASKCOM, task models are hierarchical: a model of a task is a composition of other task models (called sub-tasks). In turn, each of these sub-tasks is decomposed further unless the sub-task is an "action" (a manual operation on an actual appliance) or a service call. The representation of a task model is a tree (we call it a "task tree") where the root is the task itself, the nodes represent the sub-tasks, and the edges represent the composition relations between these tasks.

Figure 4.4: A graphical model of a "make coffee" task. Dotted lines indicate decomposition choices. The "Make coffee" task decomposes into "Select coffee type" followed by either "Make cappuccino" or "Make espresso", each of which consists of entering the number of sugars, waiting, and taking the coffee.

Figure 4.4 contains a graphical presentation of a "make coffee" task model for coffee machines. This task model has a composition choice which allows the user to select the type of coffee he/she would like to make. Depending on the user's selection, either a "make cappuccino" task or a "make espresso" task will actually be executed next.
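The task-tree structure described above can be sketched as a small recursive data type; the class shape and the `leaves()` helper are illustrative assumptions, using the "make coffee" decomposition from Figure 4.4 as data.

```python
# Sketch of a hierarchical task model: a node is either composed of
# sub-tasks or is a leaf (an "action" or a service call). Illustrative only.

class TaskNode:
    def __init__(self, name, children=None, kind="composite"):
        self.name, self.kind = name, kind
        self.children = children or []

    def leaves(self):
        """Actions and service calls at the bottom of the task tree."""
        if not self.children:
            return [self.name]
        return [leaf for c in self.children for leaf in c.leaves()]

make_coffee = TaskNode("Make coffee", [
    TaskNode("Select coffee type", kind="action"),
    TaskNode("Make cappuccino", [           # one branch of the choice
        TaskNode("Enter number of sugars", kind="action"),
        TaskNode("Wait", kind="action"),
        TaskNode("Take the coffee", kind="action"),
    ]),
])
print(make_coffee.leaves())
# → ['Select coffee type', 'Enter number of sugars', 'Wait', 'Take the coffee']
```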

4.5.2 Task Refinement

Task refinement allows the user to gradually refine a general task into a more specific task at runtime. For example, the user can start with a "control device" task and finally refine this to a "set TV channel" task. TASKCOM supports this feature through the use of service calls in task specifications. In particular, the invocation of a service can return another task specification which immediately replaces that service in the current task tree. We also call this feature dynamic task composition.

Formally, a task t′ is a refinement of a task t if the execution of t returns t′; we denote this relation by R(t, t′). Task refinement is irreflexive, anti-symmetric, and transitive. Irreflexivity means that no task is a refinement of itself; in other words, R(t, t) never holds. Anti-symmetry means that for all tasks t and t′ with t ≠ t′, if R(t, t′) holds then R(t′, t) must not hold. Transitivity means that if R(t, t′) and R(t′, t′′) then R(t, t′′).
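These three properties can be checked mechanically on a concrete refinement relation. The sketch below computes the transitive closure of a small, illustrative set of refinement pairs (the task names are assumptions) and asserts irreflexivity and anti-symmetry.

```python
# Sketch: verifying the declared properties of the refinement relation R,
# where R(t, t') holds if executing t returns t'. Pairs are illustrative.

refinements = {("control device", "set TV channel"),
               ("control device", "adjust light"),
               ("set TV channel", "set TV channel 7")}

def transitive_closure(rel):
    """Add (a, d) whenever (a, b) and (b, d) are in the relation."""
    closure = set(rel)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

R = transitive_closure(refinements)
assert all(a != b for (a, b) in R)             # irreflexive
assert all((b, a) not in R for (a, b) in R)    # anti-symmetric
print(("control device", "set TV channel 7") in R)  # transitivity
# → True
```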

4.6 Task Execution

The user accomplishes a task by interacting with the Task Execution Engine (TEE) via the TASKUI interface (TASKUI is a mobile application that runs on the user's mobile device, and TEE is hosted on a task server).

4.6.1 Model of Task Execution Engine

Figure 4.5 shows the main components and the data model of TEE. TEE communicates with the user profile manager to maintain a model of current users. Each user is associated with his/her current task instances. TEE is designed to handle all task instances at the same time. The XML parser is used to validate and load XML-based task models into task instances for execution. The script engine is used to evaluate condition expressions (e.g., boolean expressions specified in task models) at runtime. Our current implementation uses the Rhino ECMAScript engine3. Rhino is an open-source JavaScript engine which is developed entirely in Java and managed by the Mozilla Foundation.

Figure 4.5: Main components and data model of the task execution engine: the engine maintains a model of users and relies on an XML parser and a script engine.

3http://www.mozilla.org/rhino


4.6.2 User Model

Figure 4.6 shows the simplified user model in TASKCOM. Each user first regis-

ters with the system using a unique user name (ID). A user at a time has a set of

suggested tasks, a set of live task instances, a set of shared task instances, and the

currently active task. The set of suggested tasks is updated on change of the user’s

context as described in the previous section.

Figure 4.6: The simplified user model in TEE: a user (identified by a user ID) has an active task, suggested tasks, live task instances, and shared task instances.

4.6.3 Model of Task Instances

Figure 4.7 shows the simplified data model of a task instance in TEE. TEE creates an instance of a task once the user starts to execute that task. A task tree represents the decomposition of the task at runtime. The back stack is used to enable the feature that allows the user to go back to a previous sub-task. The variables table is a hash table which stores ⟨key, value⟩ pairs returned from service invocations or from the user's inputs; these values are passed into expressions or between sub-tasks. The current sub-task of a task instance is a pointer to the sub-task (i.e., a node) within the task tree which the user is currently executing.
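The four parts of this data model can be sketched as a small record type. The field names and the simplification of the task tree to a flat list are assumptions for illustration, not the actual TEE implementation.

```python
# Sketch of a task instance as described above: a task tree, a back stack,
# a variables table, and a current sub-task pointer. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class TaskInstance:
    task_id: str
    task_tree: list                       # simplified: a flat list of sub-task names
    back_stack: list = field(default_factory=list)
    variables: dict = field(default_factory=dict)  # <key, value> pairs
    current: int = 0                      # index of the current sub-task

inst = TaskInstance("borrow-book", ["Search book", "Locate shelf", "Scan barcode"])
inst.variables["query"] = "smart environments"  # e.g., a value from user input
print(inst.task_tree[inst.current])
# → Search book
```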

Figure 4.7: The simplified data model of a task instance in TEE: a task instance (identified by a task ID) holds a task tree, a back stack, a variables table, and a pointer to the current sub-task.


4.6.4 Lifecycle of Task Instances

When the user starts a task, switches between tasks, or navigates through the sub-tasks of a task instance, the task instance transitions between the different states of its lifecycle. For example, when the user starts to execute a task for the first time, the user interface of the task comes to the foreground and receives user focus. During this process, TEE calls several procedures to create the task instance and to present its first sub-task for the user to interact with. If the user switches to another task, TEE suspends the current task and moves it into the background (where the task is no longer visible, but the task instance and its state remain intact). Figure 4.8 illustrates the lifecycle of task instances in TEE.

Figure 4.8: The lifecycle of a task instance, expressed as a state transition diagram. execute() creates a STARTED instance; resume() moves it to ACTIVE, where next(), back(), share(), and skip() operate; suspend() moves it to INACTIVE, from which resume() returns it to ACTIVE; done() moves it to DONE, after which destroy() removes it.
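The lifecycle in Figure 4.8 can be sketched as a transition table. The event and state names follow the figure; treating navigation events (next, back, share, skip) as a self-loop on ACTIVE is an assumption about the intended semantics.

```python
# Sketch of the task-instance lifecycle as a transition table. Any
# state/event combination not listed is rejected as illegal.

TRANSITIONS = {
    ("STARTED",  "resume"):  "ACTIVE",
    ("ACTIVE",   "suspend"): "INACTIVE",
    ("INACTIVE", "resume"):  "ACTIVE",
    ("ACTIVE",   "done"):    "DONE",
}

def step(state, event):
    # Navigation events keep an active task active (self-loop in the figure).
    if event in ("next", "back", "share", "skip") and state == "ACTIVE":
        return "ACTIVE"
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in state {state}")

state = "STARTED"
for event in ["resume", "next", "suspend", "resume", "done"]:
    state = step(state, event)
print(state)
# → DONE
```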

4.6.4.1 Starting A Task

When the user selects a task to execute or to continue (from either a task suggestion or a task search result), TEE calls the execute() method, as shown in Algorithm 4.1. If the task has not been started, TEE loads the corresponding task specification, creates an instance of that task (i.e., instantiates the corresponding task tree), and assigns the current sub-task variable to the root task. Finally, TEE resumes the task instance in order to move the current sub-task to the foreground for the user to interact with.


Algorithm 4.1 execute(): The user starts executing a task given a taskID.

Require: taskID.

 1: if (liveTaskInstances.contains(taskID)) then
 2:     if (activeTask.taskID ≠ taskID) then
 3:         activeTask.suspend();
 4:         activeTask ← liveTaskInstances.get(taskID);
 5:     end if
 6: else
 7:     if (activeTask ≠ NULL) then
 8:         activeTask.suspend();
 9:     end if
10:     taskTree ← loadTaskSpecification(taskID);
11:     taskInstance ← new TaskInstance(taskTree);
12:     liveTaskInstances ← (liveTaskInstances ∪ {taskInstance});
13:     activeTask ← taskInstance;
14:     taskInstance.currentSubTask ← taskInstance.getRoot();
15:     taskInstance.backStack ← ∅;
16:     taskInstance.variablesTable ← ∅;
17:     taskInstance.state ← STARTED;
18: end if
19: activeTask.resume();

4.6.4.2 Navigation Through A Task

The user interacts with TEE via TASKUI which runs on a mobile device. The user

interface for a task instance has several commands such as “Next”, “Back”, “Skip”,

and “Share” which allow the user to interact with the task. The user’s input (in-

cluding the command he/she’s selected) is sent to TEE for processing. Proce-

dures 4.2, 4.3, 4.4, and 4.5 present the pseudocode, which shows how TEE handles

the user’s commands with a task instance. In particular, the next() method han-

dles the Next command. It first executes the current sub-task with the arguments

which are supplied by the user, then pushes the current sub-task onto the back

stack as a mechanism for the Back command. Next, if there exist uncompleted

sub-tasks, the next uncompleted sub-task is executed to generate the user inter-

face for it, otherwise the root task is seen to be completed and will be destroyed.

The back() method handles the Back command. If the back stack is

empty (i.e., there are no previously completed sub-tasks of the current task), it is

assumed that the user wishes to cancel the current task, and the task instance is destroyed.

Otherwise, the previous sub-task (which is popped from the back stack) is undone (e.g., by

simply setting its status to “UNDONE”) and re-executed to generate the user interface

for it. The skip() method handles the Skip command which is available only if

the current sub-task is specified as “optional”. This method simply sets the status

of the current sub-task (and its sub-tasks if they exist) to “DONE”. The share()

method handles the Share command which is available only if the current sub-task

is specified as “shareable”. The method gathers the sharee’s information and then

sends a push notification to the sharee’s mobile device. The notification includes

the shared task’s data which allow the task to be resumed on the sharee’s mobile

device.

Algorithm 4.2 next(args): The user executes “next” of a given activeTask.

Require: args: Values provided by the user.

1: currentSubTask.done(args);
2: backStack.push(currentSubTask);
3: currentSubTask ← getNextUnDoneSubTask();
4: if (currentSubTask ≠ NULL) then
5:     currentSubTask.run();    ⊲ Generate a message and send it to TASKUI.
6: else
7:     destroy();
8: end if

Algorithm 4.3 back(): The user executes “back” of a given activeTask.

1: if (backStack.isEmpty()) then
2:     liveTaskInstances ← (liveTaskInstances \ {activeTask});
3:     destroy();
4: else
5:     currentSubTask ← backStack.pop();
6:     currentSubTask.undo();
7:     currentSubTask.run();    ⊲ Generate a UI message and send it to TASKUI.
8: end if

Algorithm 4.4 skip(): The user executes “skip” of a given activeTask.

1: currentSubTask.state ← DONE;
2: for all t ∈ currentSubTask.getSubtasks() do
3:     t.state ← DONE;
4: end for
5: next(NULL);
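The Next/Back/Skip handling of Algorithms 4.2–4.4 can be sketched together as follows. The SubTask class, the flattened execution order, and the status strings are simplifying assumptions made for illustration:

```python
# Hypothetical sketch of the Next/Back/Skip handling (Algorithms 4.2-4.4).
# SubTask, the flattened sub-task list, and the status values are assumptions.

class SubTask:
    def __init__(self, name, optional=False, children=()):
        self.name = name
        self.optional = optional
        self.children = list(children)
        self.status = "UNDONE"


class ActiveTask:
    def __init__(self, sub_tasks):
        self.sub_tasks = sub_tasks      # sub-tasks in execution order
        self.back_stack = []
        self.current = sub_tasks[0]
        self.destroyed = False

    def _next_undone(self):
        return next((t for t in self.sub_tasks if t.status == "UNDONE"), None)

    def next(self, args=None):
        self.current.status = "DONE"    # done(args)
        self.back_stack.append(self.current)
        nxt = self._next_undone()
        if nxt is not None:
            self.current = nxt          # run(): generate a UI message
        else:
            self.destroyed = True       # root task completed

    def back(self):
        if not self.back_stack:
            self.destroyed = True       # user cancelled the task
        else:
            self.current = self.back_stack.pop()
            self.current.status = "UNDONE"  # undo(), then re-run its UI

    def skip(self):
        if self.current.optional:       # Skip is only offered when optional
            for t in self.current.children:
                t.status = "DONE"
            self.next()
```

Going back marks the popped sub-task as “UNDONE” again, so a subsequent Next re-executes it, matching the re-execution behaviour described above.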


Algorithm 4.5 share(shareeID): The user executes “share” of a given activeTask.

Require: shareeID: The ID of the user who will get involved in the task.

1: sharee ← getUser(shareeID);
2: sharee.addSharedTaskInstances(activeTask, sharer);
3: pushNotification(sharee, activeTask, sharer);

4.6.5 Resuming Tasks on Different Mobile Devices

Because task instances are completely stored on the task servers and TASKUI only

acts as the user interface of the tasks, the user can resume a task instance on any

mobile device, provided that the device has TASKUI installed. This works

like remote desktop sharing on the Windows operating system, where one desktop

screen is shown on multiple machines. For example, a user starts a task on a tablet, then for

some reason, he/she would like to continue the task on a smartphone.

4.6.6 Collaboration Between Distributed Users

Our framework supports the coordinated mode of user collaboration. This mode

is used when a sharer wants all participants to share the same view of an ongoing

task and they all can manipulate the task at the same time (e.g., provide inputs, go

back, and go next).


Chapter 5

Implementation

In this chapter, we present an implementation of the TASKCOM framework. The

TASKCOM framework provides a methodology, a tool, and system components

for development and deployment of the task-oriented paradigm for user interac-

tion with smart environments.

5.1 Task Modelling And Specification

In this section, we present how we model a task and how a task model is specified.

We model tasks based on a task ontology (presented in the next sub-section). We

specify task models using an XML language. The specified task models are then

validated against an RNC schema which conforms to the proposed task ontology.

5.1.1 Task Ontology

“Ontology is composed of two parts: taxonomy and axioms. Taxonomy is a hier-

archical system of concepts and axioms are established rules, principles, or laws

among the concepts” [76]. The ultimate goal of a task ontology is to provide the

vocabulary necessary and sufficient for building a model of tasks. It forms the core

schema for specifying and validating task models in TASKCOM.

There have been several task ontologies proposed in the literature. For example,

a task ontology for clinical procedures [77] contains a hierarchy of tasks and sub-tasks

for determining cerebrovascular disorders and other illnesses; a task ontology

for geospatial data retrieval [78] is used as a guide for finding relevant geospatial

data; and there is a task ontology for modelling users’ activities in navigating mobile

Internet services [79].

We propose a task ontology by analysing structures and characteristics of typi-

cal tasks in the context of smart environments. The proposed ontology is designed

to meet the requirements and objectives of the framework as outlined in the sce-

narios in Chapter 4. We prove the completeness of the ontology (in the context of

our framework) by demonstrating that it can be used to model a diversity of tasks

in smart environments (several examples of task models are given in the following

sections).


Figure 5.1: Overview of the proposed task ontology. The boxes represent concepts, while the arrows connecting the boxes indicate relationships between the corresponding concepts. An asterisk (∗) label at the end of an edge indicates that the task can have multiple instances of the corresponding concept.

The proposed task ontology is illustrated in Figure 5.1. The task ontology in-

cludes nine concepts: Task, Task ID, Task Description, Shareability, Optionality,

User Interface, Condition, Service, and Sub-Task. The Task ID is used as a reference

to a task by a machine. The Task Description includes a verbal description of the

goal of a task. The Shareability attribute is used to indicate if a task can be shared

(collaborated) with other users. The Optionality attribute is used to indicate if a


task is optional (i.e., can be skipped). The User Interface includes instructions and

inputs of a task. The user interface of a task does not include the user interfaces

of the sub-tasks because the sub-tasks themselves have their own user interfaces.

The TASKUI client software will use the specification of the user interface to gen-

erate the actual graphical user interface on a mobile device for the user to interact

with the task. The Condition is an expression which will be evaluated at runtime.

A condition is used when there are two options (e.g., two sub-tasks) to be selected.

The selection depends on the result of the evaluation of the condition. The Ser-

vice specifies services required by a task. Finally, the Sub-Task is used to specify

sub-tasks of a task. A sub-task is a task or a reference to an external task model.


Figure 5.2: Syntax diagram for specifying a task.

The syntax for defining a task is described in Figure 5.2. A task can consist of

zero or more Condition, Sub-Task, and Service elements which are mixed together

in any order to fulfill the task. The default value of the Optionality attribute is

false which indicates that the task by default is not optional. The user can only

skip a task if it is specified as optional. Finally, the default value of the Shareability

attribute is false which indicates that the task by default is not shareable (i.e., the

user cannot collaborate with other users in accomplishing this task; the user can

only share a task with other users if it is specified as shareable).

5.1.1.1 Service

Figure 5.3 describes the syntax for specifying a service that is required by a task.

A service can have an ID that can be used as a reference to the service from other



Figure 5.3: Syntax diagram for specifying a service.

parts of the task (e.g., if a service returns a value, a sub-task can take the value as

an input by referring to that service via the service’s ID). The Service URI is a

URL with or without a static query string, and with or without a dynamic query

string which is represented by zero or more instances of the Argument concept.

An argument is a key-value pair.

5.1.1.2 Condition


Figure 5.4: Syntax diagram for specifying a condition.

Figure 5.4 describes the syntax for specifying a condition in a task. A condition

can have an ID that can be used as a reference to the result of evaluating the con-

dition from other parts of the task (e.g., a service can use the result of a condition

evaluation as an argument by referencing to that condition using the condition’s

ID). The Expression concept is used represents a boolean expression which will be

evaluated by the task execution engine at runtime. If the evaluation of the expres-

sion returns true, the Then element will be executed. Otherwise, the Else element

will be executed. The syntax for specifying Then and Else elements is described in

Figure 5.5. Basically, a Then or Else element is a Service, a Condition, or a Task

element.


Figure 5.5: Syntax diagram for specifying Then and Else elements.


5.1.1.3 Ontology of User Interface

The ontology of a user interface for a task is presented in Figure 5.6. Basic user

interface elements, which are supported by the current implementation, include

map, text, image, select, slider, list, input, and listener. Listener is a special type of

user interface element. It listens for any changes the user makes

to other user interface elements and responds by invoking a specific

service. For example, when the user changes the value of a slider on the user

interface of an “adjust TV volume” task, the slider’s listener will invoke a specific

service which then adjusts the volume accordingly and immediately.

In future versions of the ontology, more user interface elements can be

added. The syntax for defining a user interface for a task is described

in Figure 5.7. Specifically, to conform to the design guidelines of user interfaces

for mobile applications (our TASKUI application is deployed on mobile devices),

a user interface can only be either a map, a list, or a combination of the other types

of the user interface elements (i.e., Select, Text, Image, Slider, Input, and Listener).


Figure 5.6: Ontology of User Interface.

Figure 5.7 describes the syntax for specifying the user interface for a task. There

is a loop indicating that there can be multiple mixed elements of Select, Text, Im-

age, Slider, Input, and Listener. However, Map and List should be presented indi-

vidually on different user interfaces requiring different sub-tasks.



Figure 5.7: Syntax diagram of a user interface for a task.

Map: The Map concept is used to present a map given a postal address or a geolo-

cation (e.g., latitude and longitude). It is also used for a user to pick a place

on the map. The selected place can then be used as an input for a task or an

argument for a service.


Figure 5.8: Syntax diagram for specifying a map.

Figure 5.8 describes the syntax for specifying a Map user interface. The Name

attribute is used for later references back to a Map. The Centre specifies the

centre point of the map when it is initially displayed to the user. A Map can

be displayed with a set of zero or more points of interest (POIs). The Zoom

specifies the initial zoom level of the map. A centre point and a POI can be an

address or a pair of latitude and longitude values, as described in Figure 5.9.


Figure 5.9: Syntax diagram for specifying a centre or a POI on a map.

Text: Text is used to present textual instructions for a user to accomplish a task.

It can also be used as prompts for inputs.


Image: Image is used to present graphical instructions.

Select: Select is used to create an option list (e.g., multiple selections and a single

selection). Figure 5.10 describes the syntax for specifying a Select element.

The Name attribute is used for later references to the Select element. If the

Multiple attribute is provided, the user can select multiple options at once.

An Option element is a triple of a description, a value, and an optional

Selected attribute.


Figure 5.10: Syntax diagram of a Select element.

Listener: Listener is used to listen for changes (e.g., changes of values) the user

made on other user interface elements (e.g., a Slider or a List) and invoke

corresponding services in the background. A Listener element is invisible

to the user. Figure 5.11 describes the syntax for specifying a Listener ele-

ment. The URI is a URL that will be called with the Argument when the

listener gets notified of changes. The Argument references the value of

the corresponding input element (e.g., a List or a Slider) with which this listener is

associated.


Figure 5.11: Syntax diagram for specifying a Listener.

List: List is used to create a list of items. Figure 5.12 describes the syntax for spec-

ifying a List element. A List may be specified with a default value, at least

one option, and can be associated with a Listener referenced by the Listener’s

name. The associated Listener will be notified to execute once the user selects

an item on the list.

Slider: Slider is used to create a slider (or a seeker) that allows a user to select a

value from a range of values. Figure 5.13 describes



Figure 5.12: Syntax diagram of a list.

the syntax for specifying a Slider element.


Figure 5.13: Syntax diagram of a Slider.

Input: Input is used to create an input field where the user can enter data. An in-

put field can vary in many ways, depending on its data type attribute. Cur-

rently, the data types of an input field include “barcode”, “address”, “date-

time”, “phonenumber”, “string”, “boolean”, “integer”, “double”, and “file”.

Future implementations of the framework can add other types of data. Fig-

ure 5.14 describes the syntax for specifying an Input element.


Figure 5.14: Syntax diagram of an input.

5.1.2 Task Specification Language

We specify task models in XML documents which are called task model specifications

(or task specifications for short). Note that there can be tasks which represent the

same task goal but whose specifications are different (e.g., for different smart

environments). At runtime, when the user starts a task, TASKOS will load and

interpret the corresponding task specification, and generate step-by-step guidance

for the user to follow in accomplishing the task.

We have designed a basic RNC¹ schema for specifying task models². Figure 5.15

¹ http://relaxng.org/
² The schema and task specifications can be found at https://github.com/ccvo/taskcom/


shows the current version of this schema. Figure 5.16 shows a specification of

an “adjust TV volume” task and its corresponding user interface generated by

TASKUI on an Android mobile device.

default namespace = "http://homepage.cs.latrobe.edu.au/ccvo/task"
namespace xsd = "http://www.w3.org/2001/XMLSchema"
namespace a = "http://relaxng.org/ns/compatibility/annotations/1.0"

start = Task | IncludedTask | Service | If

Task = element task {
    attribute id { xsd:ID }?,
    [ a:defaultValue = "false" ] attribute optional { xsd:boolean }?,
    [ a:defaultValue = "false" ] attribute shareable { xsd:boolean }?,
    Title?,
    UI,
    (Task | IncludedTask | Service | If)*
}

Title = element title { text }

UI = element ui { ( MapView | TextView | Image | Select |
                    Input | Slider | Listview | Listener )* }

TextView = element textview { attribute text { text } }

Image = element img { attribute url { xsd:anyURI } }

Select = element select { attribute name { xsd:QName },
    attribute value { text }?, Option* }

Input = element input { attribute type { DataType },
    attribute name { xsd:QName }, attribute value { text }? }

Option = element option { attribute value { text }, attribute text { text } }

MapView = element mapview { attribute address { text }? }

Slider = element slider {
    attribute name { xsd:QName }, attribute value { xsd:integer }?,
    attribute min { xsd:integer }, attribute max { xsd:integer },
    attribute step { xsd:integer }, attribute listener { xsd:QName }? }

Listview = element listview { attribute name { xsd:QName },
    attribute value { text }?, attribute listener { xsd:QName }?, Option* }

Listener = element listener { attribute id { xsd:ID }?,
    attribute url { xsd:anyURI }, Arguments? }

Service = element service { attribute id { xsd:ID }?,
    attribute url { xsd:anyURI }?, Arguments? }

Arguments = element args { Argument* }

Argument = element arg { attribute name { xsd:QName }?, attribute value { text } }

If = element if { attribute id { xsd:ID }?, attribute condition { text }, Then, Else? }

Then = element then { (Task | IncludedTask)+ }

Else = element otherwise { (Task | IncludedTask)+ }

IncludedTask = element include { attribute id { xsd:ID }?, attribute taskid { xsd:ID } }

DataType = "barcode" | "address" | "datetime" | "phonenumber" |
           "string" | "boolean" | "int" | "double" | "file"

Figure 5.15: The schema of TASKCOM’s language.


<task xmlns="http://homepage.cs.latrobe.edu.au/ccvo/task"
      id="set_tv_volume">
  <title>Set TV Volume</title>
  <ui>
    <textview text="Let’s set the TV’s volume:"/>
    <slider name="volume" value="70" min="0" max="100"
            step="5" listener="volume_listener"/>
    <listener id="volume_listener"
              url="http://<ipaddress>/taskos/setTVVolume">
      <args>
        <arg name="tvid" value="1"/>
        <arg name="volume" value="$volume"/>
      </args>
    </listener>
  </ui>
</task>

Figure 5.16: A specification of an “adjust TV volume” task and its corresponding user interface generated by TASKUI on an Android mobile device.
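A client processing such a specification could extract its UI elements with a standard XML parser. The sketch below parses a copy of the Figure 5.16 specification; the `<ipaddress>` placeholder is replaced by a dummy host (`example.org`) only because a literal `<` is not valid inside an XML attribute value:

```python
# Parsing the "adjust TV volume" specification of Figure 5.16 with the
# Python standard library. The host name is a dummy stand-in for the
# <ipaddress> placeholder used in the thesis listing.
import xml.etree.ElementTree as ET

SPEC = """<task xmlns="http://homepage.cs.latrobe.edu.au/ccvo/task"
      id="set_tv_volume">
  <title>Set TV Volume</title>
  <ui>
    <textview text="Let's set the TV's volume:"/>
    <slider name="volume" value="70" min="0" max="100"
            step="5" listener="volume_listener"/>
    <listener id="volume_listener"
              url="http://example.org/taskos/setTVVolume">
      <args>
        <arg name="tvid" value="1"/>
        <arg name="volume" value="$volume"/>
      </args>
    </listener>
  </ui>
</task>"""

# Namespace map for the namespaced element lookups below.
NS = {"t": "http://homepage.cs.latrobe.edu.au/ccvo/task"}

root = ET.fromstring(SPEC)
slider = root.find("t:ui/t:slider", NS)
listener = root.find("t:ui/t:listener", NS)
```

The slider's listener attribute names the listener element's id, which is how TASKUI would wire the slider's value changes to the service invocation.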

5.1.2.1 Task Decomposition Choice

We use a Condition element to specify a decomposition choice in a task. A decomposition

choice lets a task be decomposed based on a condition at runtime.

We specify a Condition element as an If-Then-Otherwise statement in our

language. As shown in Figure 5.17, a decomposition choice is represented by an

If-Then-Otherwise element which contains a conditional expression. This

expression will be evaluated at runtime by the task engine in order to determine

an appropriate sub-task to be executed next. The “make cappuccino” task and

the “make espresso” task are referenced within this specification by the use of

include elements.

5.1.2.2 Task Refinement

Table 5.1 illustrates a scenario where the task specifications contain a service call

that returns another task specification which is more specific than the previous

task. In this scenario, a user wants to control a device remotely. He starts the “con-

trol device” task which is very simple and general; its specifications has only one

service with no user interface. The execution of the “control device” task directly

leads to invocation of the “get locations” service, which in turn returns the specifi-

cation of a task called “select location”. The “select location” task asks the user to


<task xmlns="http://homepage.cs.latrobe.edu.au/ccvo/task"
      id="make_coffee" shareable="true">
  <title>Make Coffee</title>
  <ui>
    <textview text="What coffee to make?"/>
    <select name="coffee_type">
      <option value="0" text="Cappuccino" selected/>
      <option value="1" text="Espresso"/>
    </select>
  </ui>
  <if condition="$make_coffee.coffee_type == 0">
    <then>
      <include taskid="make_cappuccino"/>
    </then>
    <otherwise>
      <include taskid="make_espresso"/>
    </otherwise>
  </if>
</task>

Figure 5.17: A specification for a “make coffee” task which shows how to specify a decomposition choice and references to external tasks.
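One plausible way for the task engine to evaluate such a condition is to substitute the `$task.variable` references from the task instance's variables table and then evaluate the resulting boolean expression. The variable-table layout and the regex-based substitution below are assumptions for illustration, not the thesis implementation:

```python
# Illustrative evaluation of a decomposition-choice condition such as
# "$make_coffee.coffee_type == 0" (Figure 5.17). The variables-table
# layout and the substitution mechanism are assumptions.
import re

def evaluate_condition(expr, variables):
    """Substitute $task.variable references with their current values,
    then evaluate the resulting boolean expression."""
    def lookup(match):
        return repr(variables[match.group(1)])
    substituted = re.sub(r"\$([\w.]+)", lookup, expr)
    # A production engine would use a safe expression parser, not eval().
    return bool(eval(substituted))

variables = {"make_coffee.coffee_type": 0}
branch = ("make_cappuccino"
          if evaluate_condition("$make_coffee.coffee_type == 0", variables)
          else "make_espresso")
```

Here the user's selection (option value 0, Cappuccino) makes the condition true, so the engine would follow the then branch and include the "make cappuccino" task.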

select a location from a location list. After the user selects a location, the runtime

engine will invoke the “get devices” service which is specified in the “select loca-

tion” task. The invocation of the “get devices” service returns a specification of

another task called “select device” that presents a list of devices in the previously

selected location. Once the user selects a device from the device list, the runtime

engine will invoke the “get commands” service which is specified in the “select

device” task. This service returns a “select command” task which allows the user

to select a command for that device (e.g., TV). Once the user selects a command

(e.g., “Set TV channel” in this example), the runtime engine will invoke the ser-

vice (e.g., the “controlTV” service in this example) which is specified in the “select

command” task. The “controlTV” service returns a “Set TV channel” task which

presents a list of channels for the user to select.

<service xmlns="http://homepage.cs.latrobe.edu.au/ccvo/task"
         id="control_devices"
         url="http://<ipaddress>/taskos/get_locations"/>

[no user interface]

<task id="select_location">
  <title>Select location</title>
  <ui>
    <textview text="Please select a location:"/>
    <select name="location">
      <option value="0" text="Bob’s house"/>
      <option value="1" text="Room BG 212"/>
      <option value="2" text="Staff Common Room"/>
    </select>
  </ui>
  <service url="http://<ipaddress>/taskos/get_devices">
    <args>
      <arg name="locationid" value="$location"/>
    </args>
  </service>
</task>

<task id="select_device">
  <title>Select device</title>
  <ui>
    <textview text="Please select a device:"/>
    <select name="device">
      <option value="0" text="TV"/>
      <option value="1" text="Lights"/>
      <option value="2" text="Coffee Machine"/>
    </select>
  </ui>
  <service url="http://localhost:8084/taskos/get_commands">
    <args>
      <arg name="deviceid" value="$device"/>
    </args>
  </service>
</task>

<task id="select_command">
  <title>Select command</title>
  <ui>
    <textview text="What would you like to do?"/>
    <select name="command">
      <option value="0" text="Set TV channel"/>
      <option value="1" text="Set TV volume"/>
    </select>
  </ui>
  <service url="http://<ipaddress>/taskos/controlTV">
    <args>
      <arg name="cmd" value="$command"/>
    </args>
  </service>
</task>

Table 5.1: Examples of task refinement.

Note that, as shown in our schema, a service, a task reference, and a decom-

position choice are among the possible instantiations of a task, i.e., we have

a notion of polymorphic tasks. Hence a service call can also return a task refer-

ence (i.e., an include element), a service, or a decomposition choice (i.e., an

If-Then-Otherwise element).
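Because of this polymorphism, the engine must dispatch on whatever element a service call returns. A minimal sketch of such dispatch follows; the handler names are hypothetical:

```python
# Sketch of dispatching on the root element of a service response,
# which per the schema may be a task, an include (task reference),
# a service, or an if (decomposition choice). Handler names are
# illustrative assumptions.
import xml.etree.ElementTree as ET

def classify_response(xml_text):
    tag = ET.fromstring(xml_text).tag
    # Strip a namespace prefix like {http://...}task if present.
    local = tag.rsplit("}", 1)[-1]
    handlers = {"task": "run_task",
                "include": "resolve_reference",
                "service": "invoke_service",
                "if": "evaluate_choice"}
    return handlers[local]
```

For instance, the "get locations" service above returns a task element, while a make-coffee branch might return an include element referencing an external task model.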


5.1.2.3 Multiple-Device Tasks

A task may involve controlling one or more devices at the same time. For exam-

ple, the task of adjusting a room’s brightness can involve the adjustment of the

ceiling light and the window drape. With the application-oriented paradigm (i.e.,

each device is controlled via an application), the user would need to switch be-

tween two different applications (i.e., one for controlling the light and the other

for controlling the window drape) to accomplish this task. This does not make for a

good user experience. With the task-oriented paradigm, devices are controlled by

calling services. Our task specification language allows specifying multiple ser-

vices for a task, each of which is associated with a listener on a user interface.

By using multiple listeners on one user interface, we allow the user to control

multiple devices on the same user interface screen (i.e., no need to switch between

different screens).

<task id="adjust_room_brightness" shareable="true">
  <title>Change the room’s brightness</title>
  <ui>
    <textview text="Let’s adjust room’s brightness"/>
    <textview text="Adjust the light:"/>
    <slider name="lightlevel" value="70" min="0" max="100"
            step="5" listener="light_listener"/>
    <listener id="light_listener"
              url="http://<ipaddress>/taskos/setLightLevel">
      <args>
        <arg name="lightid" value="1"/>
        <arg name="level" value="$lightlevel"/>
      </args>
    </listener>
    <textview text="Adjust the window drape:"/>
    <slider name="windowlevel" value="30" min="0" max="100"
            step="5" listener="window_listener"/>
    <listener id="window_listener"
              url="http://<ipaddress>/taskos/setWindowLevel">
      <args>
        <arg name="windowid" value="1"/>
        <arg name="level" value="$windowlevel"/>
      </args>
    </listener>
  </ui>
</task>

Figure 5.18: A specification of a task for adjusting a room’s brightness and its rendered user interface.

Figure 5.18 presents a task specification for the aforementioned task of adjusting a

room’s brightness and its user interface rendered on an Android mobile device. On

this user interface, the user is able to adjust the light and the window drape on

one screen. The services will be called automatically every time the user changes

the value of the sliders. This is achieved by using two listeners, each of which

listens to one of the sliders’ changes and is activated accordingly.
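The listener mechanism of Figure 5.18 can be sketched as follows. The dictionary layout, the host name, and the invoke callback are illustrative assumptions; a real implementation would issue an HTTP request to the listener's URL:

```python
# Illustrative listener dispatch for the two-slider user interface of
# Figure 5.18: when a slider changes, the associated listener's
# arguments are resolved ($lightlevel, $windowlevel) and its service
# URL is invoked. Data layout and host name are assumptions.

def resolve_args(arg_specs, ui_values):
    """Replace $name argument references with current UI element values."""
    return {key: ui_values[val[1:]] if val.startswith("$") else val
            for key, val in arg_specs.items()}

def on_slider_change(listener, ui_values, invoke):
    # invoke stands in for an HTTP call to the service URL.
    invoke(listener["url"], resolve_args(listener["args"], ui_values))

calls = []
light_listener = {"url": "http://host/taskos/setLightLevel",
                  "args": {"lightid": "1", "level": "$lightlevel"}}
# Simulate the user dragging the light slider to 40.
on_slider_change(light_listener, {"lightlevel": 40},
                 lambda url, args: calls.append((url, args)))
```

Because each listener carries its own URL and arguments, the two sliders drive two different devices from a single screen without any screen switching.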


5.2 Task Guidance

Task guidance aids the user in accomplishing a task. It includes instructions,

prompts for inputs, and presentation of information. It also includes navigation

means for the user to navigate backward and forward through the steps of the

task. Based on the specification of a task, the guidance for accomplishing the task

is generated by the task execution engine and then rendered on the user interface

by TASKUI.

5.2.1 User Stories

Figure 5.19 shows the guidance for borrowing a book from a library. In this task,

Step 1 is an introduction step that tells the user about the task he/she

is going to accomplish and asks for confirmation. Steps 2 and 5 are manual steps which are performed by the

user. These two steps are optional, so the user can skip them (e.g., the user is already

at the library so he/she can skip Step 2; future implementations of the system

can detect the user’s location to skip such a step automatically). Steps 3, 6, and 7

require inputs from the user and some computations (e.g., validating the user and

the book) done by the back-end system. Finally, Steps 4 and 8 present

information and confirmations to the user.

Figure 5.20 shows the guidance for another task called “make cappuccino”.

This is to show different types of user interfaces specified using our task specifica-

tion language. Each of the user interface screens corresponds to a task or a sub-task

in the task specification.

5.2.2 Rendering Task Guidance on TASKUI’s User Interface

A task specification encapsulates information of the task’s decomposition (e.g., se-

quence of steps) and the task’s user interface (i.e., the task guidance). When the

task server system processes a (sub-)task, it sends the information of the (sub-)task’s

user interface to TASKUI for rendering the graphical user interface on the user’s



Figure 5.19: An example of task guidance: steps to borrow a book from a library using the smartphone’s camera to scan and detect the user card’s barcode and the book’s barcode.

Figure 5.20: Task guidance for making cappuccino.

mobile device. We implement TASKUI in a way that re-uses native user in-

terface elements provided by the mobile platform and customised user interface

elements which are provided by third-party applications.


<message>
  <UItoken>bc6995f1-62f2-493f-8879-14b82ff46222</UItoken>
  <title>Borrow book from library</title>
  <ui>
    <textview text="Please scan barcode of your library card."/>
    <input type="barcode" name="user_barcode"/>
  </ui>
  <navigationbar>
    <next>true</next>
    <skip>false</skip>
    <share>true</share>
    <back>true</back>
  </navigationbar>
</message>

Figure 5.21: An example of a message containing a task’s user interface.

Figure 5.21 shows an example message sent from the server to TASKUI for ren-

dering a user interface asking the user to scan the barcode of his/her library card

using the camera of a mobile device. The message for rendering a guidance of

a (sub-)task contains four different elements: a user interface token (UItoken),

a title of the current step (title), a user interface (ui), and a navigation bar

(navigationbar).

• UItoken: TASKUI uses this token to determine whether the user interface it is currently showing is stale. When multiple users are collaborating on the same task, TASKUI on all the users' mobile devices automatically updates the user interface every time one of the users manipulates the task.

• title: It is used as the title for the current screen.

• ui: This element is used to create user interface elements on TASKUI’s screen.

This information is a copy of the ui element in the task specification.

• navigationbar: This element lets TASKUI create the navigation bar on

screen. This element is generated dynamically by the task server based on the

status of the task. The navigation bar includes four buttons which allow the user to navigate through the task. The user can only execute the next sub-task (i.e., the Next button is enabled) if a next sub-task exists; the user can only skip or share the current sub-task if it is, respectively, optional or sharable; and the user can only go back to the previous sub-task if one exists before the current sub-task. The previously


executed sub-tasks of an ongoing task are stored in a special stack called the

back stack. In our current implementation, the back mechanism lets the user go back and re-execute a previous sub-task. However, it does not undo the effects caused by the earlier execution of that sub-task. To support an undo feature to some extent, the developer would need to define an "undo task" which reverses or compensates for the effects caused by the execution of the corresponding task.
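To make the rendering step concrete, the sketch below parses a message of the shape shown in Figure 5.21 using the standard Java DOM API and extracts the fields that drive the screen: the token, the title, and the four navigation flags. The class and field names are illustrative, not taken from the TASKUI implementation.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Illustrative sketch only: parse a TASKUI message (cf. Figure 5.21) and
// extract the fields that drive rendering. Names are hypothetical.
public class UiMessage {
    public final String uiToken;
    public final String title;
    public final boolean next, skip, share, back;

    public static UiMessage parse(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return new UiMessage(doc.getDocumentElement());
        } catch (Exception e) {
            throw new RuntimeException("malformed message", e);
        }
    }

    private UiMessage(Element root) {
        uiToken = text(root, "UItoken");   // stale-screen detection
        title   = text(root, "title");     // screen title
        Element nav = (Element) root.getElementsByTagName("navigationbar").item(0);
        next  = flag(nav, "next");         // enabled only if a next sub-task exists
        skip  = flag(nav, "skip");         // enabled only if the sub-task is optional
        share = flag(nav, "share");        // enabled only if the sub-task is sharable
        back  = flag(nav, "back");         // enabled only if a previous sub-task exists
    }

    private static String text(Element parent, String tag) {
        return parent.getElementsByTagName(tag).item(0).getTextContent();
    }

    private static boolean flag(Element nav, String tag) {
        return Boolean.parseBoolean(text(nav, tag));
    }
}
```

The ui children themselves (e.g., textview and input elements) would be mapped to platform widgets in a similar loop; this sketch only covers the message envelope.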

In the above example, the ui element includes a barcode input which is represented on the screen as a text view and a "Tap to scan" button. Tapping the button invokes a third-party barcode scanner component which returns the scanned barcode to TASKUI. This example shows how TASKUI can seamlessly integrate third-party services/components for a better user experience while accomplishing a task.

5.3 Distributed Collaboration

Our framework allows distributed users to collaborate on accomplishing a sharable

task (i.e., the “sharable” attribute is specified “true” in the task specification).

While a task is being accomplished, all the users who are collaborating on the task

can observe and manipulate the task on their local user interfaces.

5.3.1 User Story

Figure 5.22 illustrates a scenario where a user (Tom) wants to borrow a book remotely using TASKOS with help from his colleague (David), who is currently in the library. After searching for and locating the book (Figure 5.22.a), Tom shares the task with David by clicking the "Share" button (TASKUI enables this button because the current task is sharable) and provides David's username and a message (Figure 5.22.b). The message asks David to take the book


from the shelf and scan the book’s barcode using his smartphone’s camera from

within TASKUI on his smartphone. Once David receives the notification of this shared task (Figures 5.22.c and 5.22.d), he can help Tom continue the task as described in the accompanying message. While David continues the task, Tom can observe the progress synchronously.

(a) Tom’s TASKUI (b) Tom’s TASKUI (c) David’s TASKUI (d) David’s TASKUI

Figure 5.22: An example of two users collaborating on a task.

5.3.2 Communication for Sharing a Task

When a user wants to collaborate on a task with another user, he/she asks TASKUI to send a request for task collaboration (see Figure 5.22.b). A request for task collaboration consists of a sharer's ID, a sharee's ID, a task ID, and a message. When the task server receives a request for task collaboration, it forwards the request to the sharee's TASKUI, which notifies the sharee of the request. If the sharee accepts the request, he/she can start to accomplish the shared task.
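The server-side decision just described can be sketched as follows. Class and method names are our own illustrations, and the real system would push an asynchronous notification to the sharee's TASKUI rather than return a string.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch (names are not from the thesis): the task server's
// handling of a request for task collaboration, which carries a sharer's ID,
// a sharee's ID, a task ID, and a message (cf. Figure 5.23).
public class CollaborationRequestHandler {
    private final Set<String> connectedUsers = new HashSet<>(); // users with a reachable TASKUI

    public void connect(String userId) {
        connectedUsers.add(userId);
    }

    /** Returns "ACK" and forwards the request if the sharee is reachable,
     *  otherwise returns "Error" to the sharer's TASKUI. */
    public String handle(String sharerId, String shareeId, String taskId, String message) {
        if (!connectedUsers.contains(shareeId)) {
            return "Error"; // [sharee not found]
        }
        forwardToSharee(shareeId, sharerId, taskId, message);
        return "ACK";       // [sharee found]
    }

    private void forwardToSharee(String shareeId, String sharerId,
                                 String taskId, String message) {
        // In the real system: push a notification to the sharee's TASKUI,
        // which then replies Accept or Reject.
    }
}
```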

Figure 5.23 is a sequence diagram showing the interactions, arranged in time sequence, between two TASKUI instances and a task server when a user requests task collaboration. It also shows the sequence of messages exchanged between the entities in this scenario.


[Sequence diagram participants: Tom's TASKUI, Task Server, David's TASKUI. Tom's TASKUI sends "Request collaboration"; the server replies "Error" if the sharee is not found, or "ACK" and forwards the request to David's TASKUI if the sharee is found; David's TASKUI replies "Reject" (the server notifies Tom "Request not accepted") or "Accept" (the server notifies Tom "Request accepted"); David's TASKUI then continues the task, and the server updates both TASKUI screens.]

Figure 5.23: Sequence diagram: Request for task collaboration.

5.4 Context-aware Task Suggestion

There can be hundreds of supported tasks for a particular user at a particular point in time in a smart environment. Task suggestion [80] is a mechanism to help users quickly select their intended tasks out of the massive number of supported tasks available to them. The general idea is to suggest relevant tasks to a user based on the user's context. The suggestion process can be triggered by a change in the user's context (e.g., a change of the user's location) or by an event (e.g., the user points his/her mobile device at an object such as a TV). In our current implementation, we use location information (e.g., using GPS and Bluetooth technologies) and pointing gestures (e.g., using Cricket [81] and magnetic compass technologies) to trigger the task suggestion. Other object recognition and location tracking technologies, such as vision-based object recognition [82], bar/QR-code readers [83], RFID [84], and NFC3, can also be easily integrated into our system to trigger the task suggestion.

3http://www.nfc-forum.org/resources/white_papers/nfc_forum_marketing_white_paper.pdf


5.4.1 Place-Based Tasks

A task is usually associated with a place (e.g., to fulfil the purpose of the place).

Therefore, a place can determine a list of primary tasks. Based on this assump-

tion, we define a place-based task space which is used for suggesting tasks in our

framework. Figure 5.24 illustrates an example of place-based tasks for a univer-

sity. Within the ambience of a university, there are sub-places such as car park,

staff common room, seminar room, library, and bookshop. A sub-place may fur-

ther contain smaller places, for example, a library has checkout desks, study car-

rels, and discussion rooms. Each place has tasks associated with it. For example,

tasks a user may perform in a library are ‘Locating a book’, ‘Searching for a book’,

‘Borrowing a book’, and ‘Booking a study carrel’.

Figure 5.24: Examples of associations between tasks and spaces.

A user's location is an important context used in many context-aware systems [13, 14]. Our idea of task suggestion builds on the concept of location-aware computing [85] and on a pointing-controlling metaphor. An example of a location-aware system is the Stick-e document [86], which can bring up a control panel on a PDA when the user is close to a copier.


For example, when the user enters a car, the system suggests car-supported tasks such as playing music or finding directions. Figures 5.25.a and 5.25.b show different task suggestions in different smart environments (triggered either automatically or manually).


Figure 5.25: Examples of space-based task suggestions.

The user can also explicitly query for available tasks supported by a particular object (e.g., a coffee shop or a TV). For example, when the user points4 the mediating device (e.g., a smartphone) at a TV, the system suggests TV-supported tasks such as watching TV, changing the channel, or adjusting the volume. Figure 5.26 illustrates different task suggestions when the user points the mediating device at a TV and at an air-conditioner in his/her personal office.

Figure 5.26: Pointing & Tasking metaphor: different task suggestions when the user points to a printer, a television, or an air-conditioner.

TASKOS also maintains a list of tasks which are possible in the current smart

environment. The task list reflects the current capability of the smart environment.

To improve the usefulness of the task list, we provide a keyword-search feature

that allows the user to search for a particular task.

4Where pointing could mean the use of compass and location recognition, image-based object recognition, RFID, NFC, Bluetooth, bar/QR-code, ultrasound, or infrared technologies.


5.4.2 Hierarchical Model of Smart Environments

We represent smart environments hierarchically, since they are related to each other. For example, the university has a library and faculties; the library has a TV and bookshelves. Each smart environment has its purpose and is supposed to support its relevant tasks. For example, a university library supports a task for borrowing books. We associate each task with one or more supporting smart environments. For example, we associate the "Watch TV" task with the TV and the "Borrow book" task with the university library. If the TV is physically located within the library, then the "Watch TV" task is also automatically associated with the library.

In TASKCOM, each task is pre-associated with a smart environment (sometimes we call these spaces for short). A space can be a university, a library, a room, a car, or even a particular appliance, device, or physical object. A space may include other sub-spaces, creating a space tree. Each space may have a delegated task server which handles all tasks supported by that space and its sub-spaces. Figure 5.27 represents a space tree called "The University" and the associated tasks. The more specific a space is, the more relevant the suggested tasks are. Note that a task supported by a nested space may be irrelevant to the outer space. For example, the task 'turning off the light in Bob's room' may not be relevant when Bob is outside the room. Therefore, we model related smart environments as a hierarchical structure in which an environment at a larger scale can consist of sub-environments at smaller scales.

Formally, a space tree is represented as a pair (V, E), where V is the set of nodes denoting spaces and E is the set of edges denoting hierarchical relations between the spaces. Denote the set of tasks from space v by T_v. If the user's current space is v ∈ V and v_1, v_2, . . . , v_n (n > 0) are the spaces (including v and the root space) along the path from the current space to the root, then the set of suggested tasks T^r is the union of the sets of tasks associated with the spaces


Space: The University. Task Server: LTU-TS. Tasks: 1. Find room; 2. View map.
Space: Room PS1 219. Task Server: null. Tasks: 1. Switch lights; 2. Print document.
Space: Television. Task Server: null. Tasks: 1. Set volume; 2. Change channel.
Space: Heater. Task Server: null. Tasks: 1. Set temperature; 2. Set fan.
Space: The Library. Task Server: LIB-TS. Tasks: 1. Borrow book; 2. Return book.

Figure 5.27: An example of task and space associations: a "University" space and its associated tasks.

along that path:

T^r = ⋃_{i=1}^{n} T_{v_i}.

5.4.3 Task Suggestion Algorithm

Algorithm 5.1 Task suggestion: it is triggered by changes of the user's context.

Require: The user's context.
Ensure: Suggested tasks.
1: v ← context.getCurrentSpace();
2: S ← getContainedSpaces(v);
3: T^r ← ∅;
4: for all s ∈ S do
5:     T_s ← getAssociatedTasks(s);
6:     T^r ← T^r ∪ T_s;
7: end for
8: T^r ← sort(T^r);
9: return T^r;
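As a minimal sketch (with illustrative names, not the thesis implementation), the union of task sets along the path from the user's current space to the root can be computed as below. Because the walk starts at the finest space, tasks from smaller enclosing spaces naturally come first, which matches the ordering in Table 5.2.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the space tree of Figure 5.27 and the suggestion
// step: collect T^r by walking from the current (or pointed-at) space to
// the root. Names are hypothetical.
public class TaskSuggester {
    final Map<String, String> parent = new HashMap<>();      // space -> enclosing space
    final Map<String, List<String>> tasks = new HashMap<>(); // space -> associated tasks

    void addSpace(String space, String enclosing, String... associated) {
        if (enclosing != null) parent.put(space, enclosing);
        tasks.put(space, Arrays.asList(associated));
    }

    /** T^r: union of the task sets along the path from `current` to the root.
     *  Finer-granularity spaces are visited first, so their tasks rank higher. */
    List<String> suggest(String current) {
        List<String> suggestions = new ArrayList<>();
        for (String v = current; v != null; v = parent.get(v)) {
            suggestions.addAll(tasks.getOrDefault(v, Collections.emptyList()));
        }
        return suggestions;
    }
}
```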

The task engine continuously listens for changes in the user's context and triggers the task suggestion algorithm (see Algorithm 5.1). Based on the user's context (e.g., the location where the user is currently located or the object the user is pointing to), the system determines the current taskable space and its enclosing


spaces and then queries the task database for the tasks associated with those spaces. These tasks are then ordered by the granularity of their associated spaces within the space tree: tasks associated with a finer-granularity space (i.e., a smaller space enclosing the user) are placed towards the top of the suggestions. For example, assume that Bob's TASKUI is currently connected to two task servers: "LTU-TS" for the University space and "LIB-TS" for the Library space. The space tree is set up as shown in Figure 5.27. When Bob moves to a new space or points to an object, his current taskable space changes, and, as a result, so do the suggested tasks. Table 5.2 shows the different tasks suggested for Bob as his context changes.

Context | Sorted suggestions of tasks
Bob enters the University | 1. Find room; 2. View map
Bob enters the Library | 1. Borrow book; 2. Return book; 3. Find room; 4. View map
Bob enters the room PS1 219 | 1. Switch lights; 2. Print document; 3. Find room; 4. View map
Bob points to the television | 1. Set volume; 2. Change channel; 3. Switch lights; 4. Print document; 5. Find room; 6. View map

Table 5.2: Different suggestions of tasks in different contexts.

5.4.4 Put Users in Control

The system allows the user to turn off the automatic task suggestion, so that the user can switch between task spaces manually to view available tasks and to control a particular smart environment remotely. This is useful, for example, when the user wants to control a particular task space remotely: Bob is currently at the University and wants to turn on the living room's heater and set it to his preferred temperature before he gets home. Figure 5.28 shows the settings for the


automatic task suggestion and the dialog for switching between smart environ-

ments.

Figure 5.28: Settings for automatic task suggestion and manually switching between smart environments.

5.4.5 An Implementation of Task Suggestion

The users' context that we currently employ to demonstrate the task suggestion feature is the users' locations. To achieve this, we use four technologies: GPS, Bluetooth, RFID [84], and Cricket5. However, we do not limit our system to these technologies; future positioning technologies can also be used. Specifically, we use GPS to infer outdoor locations (i.e., the campus and building levels) because of its inaccuracy indoors. We set an accuracy threshold of 20 metres. If the location accuracy reported by the GPS unit is greater than this threshold, the campus level is used to determine the current space where the user is located; otherwise the building level is used.
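This threshold rule is simple enough to state as code; the names below are illustrative, not from the thesis implementation.

```java
// Illustrative sketch of the accuracy-threshold rule described above: if the
// GPS-reported accuracy is worse (larger) than 20 metres, fall back to the
// coarser campus level; otherwise resolve to the building level.
public class GpsLevelSelector {
    static final double THRESHOLD_METRES = 20.0;

    static String levelFor(double reportedAccuracyMetres) {
        return reportedAccuracyMetres > THRESHOLD_METRES ? "campus" : "building";
    }
}
```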

The Bluetooth and RFID technologies are mostly used to infer indoor locations (e.g., buildings, rooms, and objects). TASKUI scans for Bluetooth devices in its surroundings and reports the scanning results to the task suggestion engine. By comparing the discovered Bluetooth devices with the Bluetooth addresses registered in the database, the user's location can be determined, which then triggers the task suggestion process.
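A minimal sketch of this lookup, assuming a database that maps registered Bluetooth addresses to spaces (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Illustrative sketch: resolve the user's indoor space by matching the
// Bluetooth addresses discovered by TASKUI against registered ones.
public class BluetoothLocator {
    private final Map<String, String> registered = new HashMap<>(); // address -> space

    void register(String address, String space) {
        registered.put(address, space);
    }

    /** Returns the space of the first discovered device that is registered,
     *  or empty if none match (no suggestion is triggered in that case). */
    Optional<String> locate(List<String> discoveredAddresses) {
        for (String address : discoveredAddresses) {
            String space = registered.get(address);
            if (space != null) return Optional.of(space);
        }
        return Optional.empty();
    }
}
```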

5http://cricket.csail.mit.edu/


Similarly, RFID readers (e.g., 13.56 MHz Mifare readers) can be placed at various locations and connected to the task suggestion engine. When a user wearing an RFID tag enters the active zone of an RFID reader, the reader reads the tag's ID and sends it to the task suggestion engine. By comparing this ID against the database of IDs, the task suggestion engine can determine the user's current location.

We have earlier experimented with this concept by automatically turning on/off a light and a fan when the user enters and leaves his/her personal office. For this experiment, the control of the light and the fan is implemented using an X10-based Home Automation Kit6. However, any other remote-control technology can also be incorporated into our system as soon as services for controlling the devices are provided.

For the purpose of experiment, and with the facilities available to us, we use the Cricket system to implement and demonstrate the pointing metaphor. Cricket uses the time difference of arrival between radio frequency and ultrasound signals to obtain distance estimates, with an accuracy between 1 cm and 3 cm. Cricket consists of beacons and listeners. A beacon can be worn by the user or attached to the user's personal device (e.g., a mobile phone) that is running TASKUI. Each listener is attached to a device/appliance in the space and connected to the task suggestion engine. At every moment, Cricket provides listener identifiers and their distances to each beacon within radio range. By comparing these distances, the task suggestion engine can determine the device/appliance currently nearest to the beacon (hence nearest to the user). The tasks supported by this device/appliance are then suggested.
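The nearest-listener decision can be sketched as below, assuming Cricket has already produced one distance estimate per listener (names are illustrative):

```java
import java.util.Map;

// Illustrative sketch: pick the listener (device/appliance) nearest to the
// user's beacon from Cricket distance estimates; the tasks supported by
// that device/appliance would then be suggested.
public class NearestAppliance {
    static String nearest(Map<String, Double> distanceByListener) {
        return distanceByListener.entrySet().stream()
                .min(Map.Entry.comparingByValue()) // smallest distance wins
                .map(Map.Entry::getKey)
                .orElse(null);                     // no listeners in range
    }
}
```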

To recognise the device/appliance the user is pointing at, we use the line-of-sight connectivity between listeners and beacons of the Cricket system. Theoretically, if a listener and a beacon are facing each other, the rate of receiving ultrasound signals at the listener is higher than when the beacon faces the listener at an angle. We set up an experiment in which a beacon was in turn placed at four

6http://www.x10.com


Figure 5.29: Experiment of the pointing metaphor using the Cricket system.

different angles facing a listener from a two-metre distance: direct facing, a 45° angle, a right angle, and opposite (see Figure 5.29). We recorded the average rate of received signals at the listener in each of the four positions for ten seconds. The results confirm that when a listener and a beacon face each other, the rate of receiving ultrasound signals at the listener is higher than when the beacon faces the listener at an angle. In other words, the device/appliance receiving the highest rate of ultrasound signals is the one the user is currently pointing at. Based on this measurement, the task suggestion engine can infer the device/appliance the user is pointing at and then suggest tasks for that device.

5.5 Capturing Users’ Intended Tasks

One of the desired features of a task-oriented system is the capability of capturing users' intended tasks. In other words, this is the ability to let the user quickly express the goal he/she wants accomplished. In existing systems, users can express their task goals using command phrases


(e.g., InterPlay [58]) or demonstrations (e.g., CAPpella [18]). The command-phrase approaches allow users to express their intended tasks in natural language. A command phrase normally consists of a verb (e.g., 'turn off'), object(s) (e.g., 'the light'), and adverb(s) (e.g., time and location, like 'in the living room'). For example, "Turn off the light in the living room" is a command phrase in which "turn off" is the verb, "light" is the object, and "living room" is the adverb. The command-phrase method relies heavily on natural language parsers (e.g., the Berkeley Parser7). With the demonstration approaches, the user first trains the system on a task they wish to repeat in the future by demonstrating how that task is accomplished. After that, the system continuously observes the user's actions to recognise the tasks it has learned previously. If the system recognises a task, it helps the user accomplish that task. This method requires the environment to be fitted with sensors (e.g., cameras in the case of CAPpella) which can continuously capture the user's actions. The system then analyses the captured data online and tries to infer the task the user is accomplishing. This approach relies heavily on pattern recognition techniques and machine learning.

5.5.1 Keyword Search for Supported Tasks

In TASKCOM, instead of allowing users to express their tasks via natural language or demonstrations, our approach uses a keyword search method: the users express their intended goals as keywords and search for a supported task that matches them. To achieve this, each task model in TASKCOM is provided with a set of keywords. The system then matches the user-provided keywords against each task's title, description, and keywords. All matched tasks are ranked by the number of matched keywords (i.e., the greater the number of matches, the higher the rank in the result list).
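A minimal sketch of this ranking, assuming each task's title, description, and keywords have been concatenated into one searchable string (names are illustrative, not from the TASKCOM implementation):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: count how many user keywords occur in a task's
// searchable text, keep only tasks with at least one hit, and order the
// results by descending hit count.
public class TaskSearch {
    static List<String> search(List<String> queryKeywords,
                               Map<String, String> taskText) { // title -> searchable text
        Map<String, Integer> score = new HashMap<>();
        for (Map.Entry<String, String> e : taskText.entrySet()) {
            String haystack = (e.getKey() + " " + e.getValue()).toLowerCase();
            int hits = 0;
            for (String kw : queryKeywords) {
                if (haystack.contains(kw.toLowerCase())) hits++;
            }
            if (hits > 0) score.put(e.getKey(), hits); // unmatched tasks are dropped
        }
        List<String> ranked = new ArrayList<>(score.keySet());
        ranked.sort((a, b) -> score.get(b) - score.get(a)); // more matches rank higher
        return ranked;
    }
}
```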

With the current implementation of TASKCOM, a user can express keywords

7http://code.google.com/p/berkeleyparser/


to search for supported tasks by typing on a keyboard, handwriting on a touch screen, or speaking the keywords into a microphone. Figure 5.30 shows the current implementation of the search screens, implemented for the Android platform. We currently use the built-in speech recognition APIs provided by the mobile platform. For handwriting recognition, we integrate a tool called Gesture Search [87] with our framework.

(a) Initial suggestions (b) Search by typing (c) Search by voices (d) Search by gestures

Figure 5.30: Using keyword search to find supported tasks.

Approaches such as those in InterPlay [58] and CAPpella [18] could be integrated into TASKCOM to provide a better user experience.

5.6 Task Repository Management

We developed a web-based prototype application (a sub-component of TASKOS) for managing the repository of task specifications for a smart environment or a group of related smart environments. Managing the task repository includes validation, addition, modification, and deletion of task specifications. Figure 5.31 is the main page showing information about the task specifications supported by the associated smart environments.

Figure 5.32 is the screen where a task can be added to the repository. The metadata of a task includes a title, a description, and an environment ID. The XML specification file of a task is validated against the language's schema. If a task specification has an error (e.g., it does not conform to the language's schema),


Figure 5.31: Management of the task repository.

the system reports the error and provides instructions for fixing it (see, for example, Figure 5.33). We currently use the ThaiOpenSource RNG validation package (the Jing API8) for validating task specifications because it is a lightweight, Java-based package that integrates effectively with our system.

5.7 Remarks

5.7.1 Why a Centralised Model for TASKCOM?

The advantages of a centralised computing model over a decentralised one are considerable. The problems and limitations of the centralised model, such as extensibility, scalability, and bottlenecks, have been largely addressed thanks to rapid improvements in processing power, storage capacity, and network bandwidth and reliability. We do not argue that the decentralised model has failed completely. This model remains suitable in infrastructure-less environments such

8http://www.thaiopensource.com/relaxng/jing.html


Figure 5.32: Adding a new task to the task repository.

Figure 5.33: Validating a task specification.

as disaster areas, open seas, deep forests, or between cars on roads. Even in such cases, infrastructure located elsewhere can still support a centralised model, thanks to technologies such as satellite networks and WiMAX.


5.7.2 Technologies Used

We deploy our framework on the client-server model. All source code is written in Java and HTML. The server-side system uses Web service technologies for service invocation. The TASKUI client is implemented using the Android platform SDK.


Chapter 6

Evaluation and Discussion

In this chapter, we present the evaluation of TASKCOM. To evaluate the different aspects of the framework, we used three evaluation methods:

Comparison of TASKCOM with existing approaches We compare TASKCOM with existing task-oriented frameworks in terms of the desired features they provide. The aim is to highlight the advantages and limitations of the compared approaches. The results show that our framework provides more features than the existing frameworks. Moreover, TASKCOM provides two extra features (task collaboration and distributed observation) which are not met by the existing frameworks.

Experiment and post-experiment questionnaire To evaluate the efficiency, effectiveness, and usability of our system, we conducted an experiment in which participants were asked to accomplish a set of specified tasks under two conditions: with and without using our system. We recorded the participants' activities while they accomplished the tasks, and asked them to answer a post-experiment questionnaire. The experiment results and the questionnaire responses show that the participants accomplished the tasks more efficiently and effectively with our system than without it, and that our system was easy to use, required less knowledge to operate, and satisfied the participants.

User survey To evaluate the context-aware task suggestion mechanism of our sys-


tem, we conducted a user survey in which we compared the tasks suggested and ordered by the participants with the tasks suggested and ordered by our system in specific contexts. The results show that the proposed context-aware task suggestion mechanism was effective and efficient: the tasks suggested by the system matched more than 50% of the tasks suggested by the participants, while the order of the tasks suggested by the system matched about 80% of the task order suggested by the participants.

In this chapter, we describe our evaluation of TASKCOM. First, we compare TASKCOM with several existing approaches in terms of their limitations and advantages. Second, we evaluate TASKCOM by putting it to use in several use cases and evaluating how it helps users, compared with the same use cases without TASKCOM. Finally, we conduct a user study in which we invite participants to use our system to accomplish several tasks and answer a post-test questionnaire, and we measure and analyse the participants' performance and responses.

6.1 Comparison With Existing Approaches

There are several existing task-oriented frameworks which also aim to address the usability problem of smart devices and environments. In this section, we present a comparison of our proposed framework with representative task-oriented frameworks. The aim is to highlight the advantages and limitations of the compared approaches.

6.1.1 Compared Subjects

For this comparison, we choose several existing task-oriented frameworks which

are closely related to our framework. These frameworks have either the aim or the

features which are similar to ours. We compare ours with the following systems:


DiamondHelp: The DiamondHelp system [54] provides a mixed-initiative interface for a user to control devices. A conversation model between the user and the interface includes a set of utterances such as "What next?", "Never mind", "Oops", "Done", and "Help". A conversational model can also include procedures that accomplish high-level tasks in terms of concrete device operations.

Huddle: Huddle [60] aims to address the complexity of controlling a system of

multiple connected devices. It uses pre-defined content flows of a given task

to connect devices together for accomplishing the task. It then generates user

interfaces which allow the user to control the flows.

InterPlay: InterPlay [58] aims to make smart environments easier to use by providing a pseudo-English user interface which allows a user to express a task such as "Play the Matrix movie on the TV in the living room".

Roadie: The Roadie system [5] aims to provide step-by-step task guidance which

is automatically generated based on an EventNet database.

TaskCE: ANSI/CEA-2018 [6] uses task models at runtime to guide users accom-

plishing tasks. ANSI/CEA-2018 provides a standard language for describing

task models.

TCE: A task computing environment [24] allows users to accomplish a task by

either choosing a service from a list or composing a complex service using

available services.

6.1.2 Compared Aspects/Features

Because we are concerned with the usability of smart environments, we compare the existing frameworks with TASKCOM in terms of the desired features they provide for end-users. These features have been reported in the frameworks’ publications.


Task guidance: Provide users with step-by-step instructions for accomplishing

tasks.

Task suggestion: Suggest relevant tasks to users based on their context.

Task collaboration: Allow multiple users to collaborate on the same task in a distributed manner.

Multiple-device tasks: Allow users to execute tasks which invoke functions from

multiple devices and services.

Multiple-step tasks: Support executing tasks which may require multiple steps.

Multi-tasking: Support executing multiple tasks at the same time.

Task discovery: Provide mechanisms for the users to quickly discover supported tasks in a particular smart environment.

Distributed observation: Allow distributed users to observe the accomplishment

of a task by other users.

Zero-configuration: Changing a smart environment in terms of functions and sup-

ported tasks should not require users’ intervention.

Remote controlling: Provide mechanisms which allow users to select and control

devices remotely.

Task composability: Provide mechanisms to assemble existing task models to cre-

ate new task models.

Dynamic decomposition: The decomposition of a task model is a runtime process that breaks a task model into sub-task models. Dynamic decomposition means that the set of sub-task models resulting from a decomposition step can differ at different times depending on conditions, which allows a task model to change at runtime. For example, in TASKCOM, adding or removing a task model from the parent task model can happen as a result of a service invocation: an invocation of a service can return a new task model, tailored to the provided arguments, and the returned task model is added to the decomposition tree of the parent task to replace that service.

Task migration: The ability of a system to migrate task instances across different smart environments. That is, a user can suspend a task and then resume it later in another smart environment, making use of the services and devices available in the local environment [88].
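The dynamic-decomposition behaviour described above can be sketched in a few lines. The sketch below is an illustration only, not TASKCOM’s actual API; all class, function, and argument names are hypothetical:

```python
# Minimal sketch of dynamic decomposition: invoking a service node may
# return a new sub-task model, which then replaces that service node in
# the parent task's decomposition tree. All names here are hypothetical.

class TaskNode:
    def __init__(self, name, children=None, service=None):
        self.name = name
        self.children = children or []  # sub-task nodes
        self.service = service          # callable returning a TaskNode, or None

def decompose(node, args):
    """Expand service nodes at runtime; the result can differ between
    calls because a service may tailor the returned sub-task model to args."""
    if node.service is not None:
        returned = node.service(args)
        # The returned task model replaces the service in the tree.
        node.children = [returned]
        node.service = None
    for child in node.children:
        decompose(child, args)
    return node

# A hypothetical service that tailors its sub-tasks to the environment.
def lighting_service(args):
    steps = [TaskNode("Adjust lights")]
    if args.get("drapes_controllable"):
        steps.append(TaskNode("Adjust window drapes"))
    return TaskNode("Adjust room lighting", children=steps)

root = TaskNode("Brighten the room", service=lighting_service)
tree = decompose(root, {"drapes_controllable": True})
# The same task model decomposes differently when drapes are controllable.
print([step.name for step in tree.children[0].children])
```

Passing different arguments to the same service yields a different set of sub-task models, which is the essence of dynamic decomposition.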

6.1.3 Results

Table 6.1 summarises the comparison among the approaches based on the aforementioned aspects. An empty cell in the table indicates that the corresponding framework does not provide that feature. A cell with a tick (X) indicates that the framework provides the feature. A cell with n/a indicates that we could not verify from the publications whether the framework provides the feature or not.

Our approach (TASKCOM) meets all the compared aspects (because they are our aims), while the other approaches each meet a subset of the aspects (their aims differ from each other and from ours). Some aspects are commonly met by several approaches (their aims overlap). Note that we say an approach meets a particular aspect if the authors reported it in their publications; we did not re-implement the compared approaches to evaluate each aspect. A limitation of this comparison method is that it can only answer yes/no questions; it cannot answer how questions (e.g., how effective and how efficient).

DiamondHelp Huddle InterPlay Roadie TaskCE TCE TaskCOM

Task guidance X X X X X

Task suggestion X X X


Task collaboration X

Multiple-device tasks X X X X X X

Multiple-step tasks X n/a X X X X

Multi-tasking X X X X

Task discovery X X X

Distributed observation X

Zero-configuration n/a n/a X X X X X

Remote controlling n/a X X X

Task composability X n/a X n/a X X X

Dynamic decomposition n/a X n/a X

Task migration

Table 6.1: Comparison between the existing frameworks and TASKCOM.

Notably, only TASKCOM supports distributed task collaboration and observation. This is because the existing frameworks were designed to work locally (i.e., there is no connection between system instances of the same framework deployed in different smart environments). In contrast, TASKCOM stores task instances globally so that they can be shared and accessed by different users from distributed locations. This centralised architecture rests on the assumption that computer networks will soon be available everywhere at any time, especially in smart environments.

Another notable point is that although task migration is one of the ultimate goals of task-oriented systems [88], none of the compared frameworks supports task migration across smart environments. Although we aim to provide this feature in TASKCOM, our current implementation does not enable it: it has no semantic modelling of services because we use static service bindings in task specifications. Our future work will address this by binding abstract services in semantic task specifications and using ontology-based reasoning mechanisms to resolve the abstract services against matching local services.


6.2 Use Cases

In this section, we evaluate TASKCOM by putting it into real-world use cases and evaluating how it helps users and developers accomplish specific tasks. We compare the user’s activities (including mental activities) required to accomplish several tasks using TASKCOM with those required to do the same tasks without TASKCOM (i.e., with the current function-centric paradigm). The aim of this evaluation is to verify whether using TASKCOM is mentally more effective and efficient for achieving certain tasks. In this section, we do not evaluate efficiency in terms of time (we evaluate this with the next method). For this comparison, consider the following scenario:

Bob’s smart personal office at his university has computer-controllable lights. Assume that the university has a task server which currently supports a set of task model specifications, one of which allows Bob to control the lights in his office. The university may at any time update these task model specifications (i.e., modify, add, or remove them).

Table 6.2 compares the activities required for accomplishing several tasks with and without using TASKCOM.

Situation 1: Bob wants to brighten the room. (At this point only the lights are remotely controllable; the window drapes cannot be controlled remotely yet.)

With TASKCOM:
1. Search for the task (e.g., say the phrase “brighten room”).
2. Select the “Adjust room lighting” task from the search result to start the task. Bob easily recognises this task as matching his goal because its description includes “brighten the room”.
3. Bob is presented with a user interface with a slider (seeker) where he can adjust the lights.

Without TASKCOM:
1. Think of which appliances should be used for this task: the lights.
2. Think of which application is used to control the lights: Smart Office Controller.
3. Think of what this application’s icon looks like in order to map the application to an icon.
4. Locate the icon in the application icon list.
5. Select the icon to start the application.
6. Bob is presented with a user interface with a slider where he can adjust the lights.

Situation 2: Bob’s university has updated the “adjust room lighting” task model, which now allows him to control the window drapes and the lights at the same time to adjust the room lighting. The university has also released a new version of Smart Office Controller for the same purpose. Bob’s task now is to darken the room.

With TASKCOM:
1. Search for the task (e.g., say the phrase “darken room”).
2. Select the “Adjust room lighting” task from the search result to start the task.
3. Bob is presented with a user interface with two sliders where he can adjust the lights and the window drapes.

Without TASKCOM:
1. Update Smart Office Controller.
2. Learn what is new in the new version: Bob learns that the application now allows him to control the window drapes as well. (These activities are usually needed for every update.)
3. Think of which appliances should be used to achieve the goal of this task: the lights and window drapes.
4. Think of which application is used to control the lights and window drapes: Smart Office Controller.
5. Think of what this application’s icon looks like to map the application to an icon.
6. Locate the application icon in the application icon list.
7. Select the icon to start it.
8. Bob is presented with a user interface with sliders where he can adjust the lights and window drapes.

Situation 3: The university has added a “print document” task model for printing documents via printers across the university. It has also released a Mobile Printer application for the same purpose. Bob wishes to print a document.

With TASKCOM:
1. Search for the task (e.g., say the phrase “print document”).
2. Select the “Print document” task from the search result to start the task.
3. Go through several steps to accomplish the printing.

Without TASKCOM:
Bob simply does not know that there is an application for this task until someone tells him about it.

Table 6.2: Comparing the user’s activities with and without TASKCOM.

As can be seen in Table 6.2, accomplishing the same task with TASKCOM requires fewer activities than without it. In particular, TASKCOM can eliminate many of the mental activities which are normally required without it.
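The task search step in Table 6.2 (Bob says “brighten room” and the system matches it against task descriptions) can be approximated by simple keyword matching. The sketch below is illustrative only; the task data and function name are hypothetical, not TASKCOM’s actual search mechanism:

```python
# Illustrative keyword-based task discovery: rank available task models by
# how many query words occur in their name or description. The task data
# below is hypothetical.

def search_tasks(query, task_models):
    words = query.lower().split()
    scored = []
    for task in task_models:
        text = (task["name"] + " " + task["description"]).lower()
        score = sum(1 for w in words if w in text)
        if score > 0:
            scored.append((score, task["name"]))
    # Highest number of matching words first.
    return [name for score, name in sorted(scored, reverse=True)]

tasks = [
    {"name": "Adjust room lighting",
     "description": "Brighten the room or darken the room using the lights"},
    {"name": "Print document",
     "description": "Print a document via a nearby printer"},
]
print(search_tasks("brighten room", tasks))  # ['Adjust room lighting']
```

Because the match is against task descriptions rather than application names, the user does not need to know which application or appliance implements the task.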

6.3 An Experiment

To evaluate to what extent our system is easier, more efficient, and more effective for users interacting with smart environments (i.e., using consumer electronic devices and pervasive services to accomplish everyday tasks), we invited a number of participants to play the role of end-users accomplishing certain tasks with and without our system. The overview of the experiment procedure is:

• Participants were invited to accomplish a set of tasks in a simulated smart environment with and without using our system. We video-recorded the participants while they accomplished the tasks, for later analysis.

• The participants were then asked to answer a post-experiment questionnaire designed to capture their perceptions and opinions about their experience in the experiment.

We analysed the captured data and the questionnaire responses. The results showed that by using our system, the participants accomplished the tasks more efficiently (i.e., in less time) and more effectively (i.e., with fewer errors, fewer requests for help, and higher completeness). They also found our system easy to use, requiring little knowledge, and quick to learn.

It is important to note that our system is designed to operate with consumer electronic devices which provide means to control their functions and query their state via external software services (e.g., web services). The services allow our system to control the devices on the user’s behalf, to monitor the state changes of the devices, and to interpret the changes as the user’s direct or indirect actions on the devices. Although we would like to test our system with real devices controlled by services, to present a more realistic scenario to the users, some of the devices available to us at this time (e.g., TV, lights, and coffee maker) do not provide software services for invoking their functions. To overcome this problem, we created a set of simulated devices. We hope that in the near future UPnP (www.upnp.org) or similar standards will become widely accepted, allowing our system to be deployed in real smart environments. The simulated representation of each device has two parts. The first contains the device’s states (e.g., whether the TV is on or off, the channel the TV is currently playing, the TV’s volume level, etc.). The second part contains the specifications of the tasks supported by the device, such as turning the TV on/off, changing the TV’s channel, and adjusting the TV’s volume.
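As an illustration of this two-part representation, the sketch below models a simulated device as a state dictionary plus a list of supported task specifications. The class, method, and task names are hypothetical, not the actual implementation:

```python
# Sketch of a simulated device: the first part holds the device's states,
# the second lists the tasks the device supports. All names are illustrative.

class SimulatedDevice:
    def __init__(self, name, states, supported_tasks):
        self.name = name
        self.states = states                    # part one: current state
        self.supported_tasks = supported_tasks  # part two: task specifications

    def invoke(self, task, **changes):
        """Apply a supported task by updating the simulated state."""
        if task not in self.supported_tasks:
            raise ValueError(f"{self.name} does not support task: {task}")
        self.states.update(changes)
        return dict(self.states)

tv = SimulatedDevice(
    name="TV",
    states={"power": "off", "channel": 1, "volume": 10},
    supported_tasks=["turn on/off", "change channel", "adjust volume"],
)
tv.invoke("turn on/off", power="on")
tv.invoke("adjust volume", volume=15)
print(tv.states)  # {'power': 'on', 'channel': 1, 'volume': 15}
```

Because state changes go through a single entry point, the system can both control the device on the user’s behalf and observe every state change, as described above.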

6.3.1 Measurements

To evaluate the usability of our system from the end-users’ perspective, we need measures. We adopt the measures proposed in ISO 9241-11 [89], which provides a widely accepted definition of usability: efficiency, effectiveness, satisfaction, ease of use, orientation, and learnability are the key measures for evaluating the usability of a system.

Efficiency: Efficiency is reflected by the time to complete a task (which subsumes the time a participant needs to find a solution and the number of taps/clicks). The time to complete a task is measured from when the user starts the task until he/she presses the last button that completes it (i.e., achieves the goal of the task). Our hypothesis for this measure is:

(h1) The time that the users spend to complete the tasks with the use of our system

is less than without using our system.

Effectiveness: Effectiveness reflects the percentage of tasks successfully accomplished, the number of errors a participant makes while completing a task, and how often external help was required to complete a task. An error is defined as the pressing of a button that does not advance progress on the current task; multiple taps/clicks with the same error are counted only once. An external help is any verbal hint from the experimenter. Our hypotheses for this measure are:

(h2) The number of tasks completed successfully will be greater when using our

system than without using our system.

(h3) The number of errors made while accomplishing the tasks will be less when

using our system than without using our system.

(h4) The number of helps taken while accomplishing the tasks will be less when

using our system than without using our system.

Ease of use: Ease of use reflects a user’s perception of using a system to accomplish tasks: whether he/she finds the system easy to use, unnecessarily complex, or very cumbersome to use; whether he/she feels confident using the system; and whether he/she would need the support of a technical person to be able to use the system. Our hypothesis for this measure is:

(h5) Our system is easy to use.

Orientation: Orientation is a user’s sense of navigating through the menus of a system in order to accomplish tasks. It reflects whether the user feels lost in the system, knows where to go next, knows where to start a task, and knows where he/she is currently located within the system. Our hypothesis for this measure is:

(h6) The participants’ sense of orientation within our system is positive.

Satisfaction: Satisfaction reflects whether the user would like to use the system frequently. Our hypothesis for this measure is:

(h7) The participants are satisfied with our system.

Learnability: Learnability reflects how quickly a user learns to use the system and how much knowledge is required. Our hypotheses for this measure are:

(h8) The participants learn quickly to use our system.


(h9) The participants do not need to learn a lot of things to use our system.
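The error-counting rule under the effectiveness measure (a repeated press of the same wrong button counts as a single error) can be sketched as a small deduplicating counter. This is an illustration under assumed event names, not the actual analysis code:

```python
# Sketch of the error-counting rule: an error is a button press that does not
# advance the task, and repeated presses of the same wrong button count once.
# The event log and button names are illustrative.

def count_errors(events, advancing_buttons):
    seen_wrong = set()
    errors = 0
    for button in events:
        if button not in advancing_buttons and button not in seen_wrong:
            errors += 1
            seen_wrong.add(button)
    return errors

log = ["Settings", "Display", "Display", "Wallpaper", "Display", "Apply"]
# "Display" is pressed three times but counts as a single error.
print(count_errors(log, advancing_buttons={"Settings", "Wallpaper", "Apply"}))  # 1
```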

6.3.2 Participants

The participants were five males and five females, aged from 20 to 50, mostly from the university community where this research was conducted. They come from different academic fields such as computing, accounting, education, physics, and mathematics, and are either postgraduates or undergraduates. All the participants use smartphones (e.g., iPhones) frequently; none of them had used Android smartphones. Because our experimental system is currently implemented for the Android platform, we deliberately invited only inexperienced Android users. This allows us to fairly compare the participants’ performance on the same tasks with and without our system.

6.3.3 Experimental Tasks

The tasks we chose for the experiment are typical of the everyday operation of electronic devices and services in smart environments. Table 6.3 summarises the experimental tasks used for this user study. They reflect the diversity of tasks: multiple-step tasks (t2, t3, t4, t8, t9, t10), multiple-device tasks (t4, t7, t9, t10), tasks involving data transfer among devices (t4, t10), and tasks that change device settings (t1, t2, t5, t6, t7). The tasks are ordered by their expected complexity, with the easier tasks placed on top so that they are executed first. Note that a smartphone can itself be seen as a normal device to be controlled; this is why the experiment includes several tasks (e.g., t1, t2, t3, t4) which involve only smartphones.

t0: Make a cup of espresso using the simulated coffee maker in the simulated smart room. Device: coffee maker. Note: trial task.

t1: Switch the Bluetooth ON/OFF on a smartphone. Device: smartphone. Note: executed with and without TASKUI.

t2: Change the wallpaper on a smartphone. Device: smartphone. Note: executed with and without TASKUI.

t3: Send an email with a photo attachment on a smartphone using a Gmail account (the account has already been set up on the smartphone). Device: smartphone. Note: executed with and without TASKUI.

t4: Share a photo between two smartphones via Bluetooth. Devices: smartphones. Note: executed with and without TASKUI.

t5: Turn on the light in the simulated smart room. Device: light. Note: executed with TASKUI only.

t6: Adjust the volume of the TV in the simulated smart room. Device: TV. Note: executed with TASKUI only.

t7: Adjust the smart room’s brightness using the light and the window drape. Devices: light and window drape. Note: executed with TASKUI only.

t8: Make a cup of cappuccino using the simulated coffee maker in the simulated smart room. Device: coffee maker. Note: executed with TASKUI only.

t9: Borrow/check out a book from the university library using a smartphone. Devices: book, ID card, phone’s camera. Note: executed with TASKUI only.

t10: Print a PDF document stored on a smartphone via the simulated printer in the simulated smart room. Devices: smartphone and printer. Note: executed with TASKUI only.

Table 6.3: Experimental tasks for the user study.

As shown in Table 6.3, t0 is a trial task that helps the participants get an idea of how our system works. The participants were asked to execute t1, t2, t3, and t4 twice: in the first round without using our system, and in the second round using our system. This allows us to compare the participants’ task accomplishment under the two conditions. We selected only these four tasks to be accomplished both with and without our system because they can actually be achieved without it, while the other tasks (t5 to t10) can only be achieved with it. The tasks t5 to t10 are included for the participants to experience how our system works, which allows them to form their perceptions and answer the post-test questionnaire.

6.3.4 Post-Test Questionnaire

We use the post-test questionnaire to gather the participants’ opinions concerning the usability of our system. The first five statements (q1 to q5) are similar to those used in a similar evaluation by Ziefle and Bay [90]. Each statement is rated on a 5-point scale (1=Strongly disagree, 2=Disagree, 3=Neutral, 4=Agree, and 5=Strongly agree). The participants are asked to rate each of these statements twice: once for their experience without our system and once for their experience with it.

(q1) It was easy for me to accomplish the tasks;

(q2) I felt confident while accomplishing the tasks;

(q3) I knew where to go next while accomplishing a task;

(q4) I knew where to start a given task; and

(q5) I knew where I was while navigating through steps of a task.

The remaining eight statements (q6 to q13) are based on the System Usability Scale [91]. We use these statements to measure the ease of use, the satisfaction, and the learnability of our system, and ask the users to rate the degree to which they agree with each statement on a Likert scale of 1 (strongly disagree) to 5 (strongly agree). Since both positive and negative statements are included in the questionnaire, responses to negative statements have their scores reversed for data analysis. For example, if a user rates ‘Disagree’ (i.e., 2) for q7, which is a negative statement, the score for the reversed statement (‘I did not find TASKUI unnecessarily complex’) is (6 − 2) = 4 (i.e., ‘Agree’).

(q6) I would like to use TASKUI frequently;

(q7) I found TASKUI unnecessarily complex;

(q8) I would need support of a technical person to be able to use TASKUI;

(q9) I found the functions of TASKUI were well organised;

(q10) There was too much inconsistency in TASKUI;

(q11) I would imagine that most people would learn to use TASKUI very quickly;

(q12) I found TASKUI very cumbersome to use;

(q13) I needed to learn a lot of things before I could get going with TASKUI.
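The score reversal described above can be sketched as follows. The set of negatively-worded statements follows the questionnaire, and the function name is hypothetical:

```python
# Sketch of reversing the scores of negatively-worded statements on the
# 5-point scale: reversed = 6 - rating, so 'Disagree' (2) becomes 'Agree' (4).
# The set of negative statements follows the questionnaire above.

NEGATIVE = {"q7", "q8", "q10", "q12", "q13"}

def adjusted_score(question, rating):
    return 6 - rating if question in NEGATIVE else rating

print(adjusted_score("q7", 2))   # negative statement: 6 - 2 = 4
print(adjusted_score("q11", 4))  # positive statement kept as-is: 4
```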

6.3.5 Procedure

We asked the participants to attend the experiment one at a time. The experiment for each participant includes three phases: a training phase, an experiment phase, and a questionnaire phase.

Training phase: Each participant was given a warm-up period of approximately five minutes to explore the testing smartphone and our system. The smartphone is an Android smartphone which acts both as a normal device to be controlled and as a hosting computer that runs our system (i.e., the smartphone becomes a remote controller). The participant was presented with several simulated devices (e.g., a TV, a coffee maker, a light, and a window drape) which he/she would interact with in the experiment phase. We showed the participant how the system works and how to use it by demonstrating the accomplishment of a sample task, ‘Make Espresso’ (task t0 in Table 6.3), during which the participant was allowed to ask questions. This phase let the participant become familiar with the operation of our system and the goal of the experiment.

Experiment phase: Each participant was asked to accomplish the tasks t1 to t4 twice: the first round without our system and the second round with it. While the participant was accomplishing a task, we video-recorded his/her performance for later analysis. We then asked the participant to accomplish the tasks t5 to t10; these later tasks let the participant experience our system, which helps him/her answer the questionnaire.

Questionnaire phase: We asked the participant to complete the questionnaire, which reflects his/her experience and perception of our system.

6.3.6 Results

Table 6.4 summarises the participants’ performance under the two experimental conditions (i.e., with and without our system) for the first four tasks (t1 to t4). Figure 6.1 is a graphical presentation of the numbers in this table. Note that if a participant could not complete a task, the time, errors, and helps were set to the maximum values recorded for that task among the participants who completed it.

                      With TASKUI   Without TASKUI
Total time (seconds)         1241             1761
Completeness                 100%              90%
Total errors                    9               65
Total helps                    11               25

Table 6.4: The participants’ performance of the tasks.


Figure 6.1: The participants’ performance with and without TASKUI: (a) total time, (b) completeness, (c) total errors, (d) total helps.

                 Task 1   Task 2   Task 3   Task 4   Average
With TASKUI         8.2     16.9     69.7     29.3    31.025
Without TASKUI     18.2     40.7     82.8     34.4    44.025

Table 6.5: Time (in seconds) spent to accomplish the tasks.
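The ‘Average’ column in Table 6.5 is the arithmetic mean of the four per-task times under each condition; a quick check:

```python
# Check that the 'Average' column of Table 6.5 is the mean of the four
# per-task times under each condition.

with_taskui = [8.2, 16.9, 69.7, 29.3]
without_taskui = [18.2, 40.7, 82.8, 34.4]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(with_taskui), 3))     # 31.025
print(round(mean(without_taskui), 3))  # 44.025
```

The averages in Tables 6.6 and 6.7 below follow the same computation.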

Figure 6.2: The average time spent for each of the tasks.


6.3.6.1 Efficiency

Table 6.5 shows the average time spent on each task by all the participants under the two experimental conditions (i.e., with and without our system), and Figure 6.2 visualises the numbers in this table.

Table 6.4 and Table 6.5 show that the time the users spent to complete the tasks with our system is significantly less than without it. Therefore, hypothesis h1 is accepted. In other words, users accomplish their tasks more efficiently when using our system.

6.3.6.2 Effectiveness

                 Task 1   Task 2   Task 3   Task 4   Average
With TASKUI           0      0.3      0.5      0.1     0.225
Without TASKUI      1.6      2.6      1.6      0.7     1.625

Table 6.6: Number of errors made during accomplishing the tasks.

Figure 6.3: Number of errors made with and without TASKUI.

                 Task 1   Task 2   Task 3   Task 4   Average
With TASKUI           0      0.4      0.6      0.1     0.275
Without TASKUI      0.2      0.5      1.3      0.5     0.625

Table 6.7: Number of helps taken during accomplishing the tasks.

Figures 6.1(b), 6.3, and 6.4 show that:

• the number of tasks completed successfully when using our system is greater

than without using our system, therefore the hypothesis h2 is accepted;


Figure 6.4: Number of helps taken with and without TASKUI.

• the number of errors made while accomplishing the tasks when using our

system is significantly less than without using our system, therefore the hy-

pothesis h3 is accepted; and

• the number of helps taken while accomplishing the tasks when using our

system is significantly less than without using our system, therefore the hy-

pothesis h4 is accepted.

The results of the experiment indicate that the participants made fewer errors and asked for help less often when using our system than when using the actual user interfaces of the devices (see Figures 6.1(c) and 6.1(d)). This indicates that our system was more intuitive to use than the actual user interfaces. In conclusion, our system helps users accomplish their tasks more effectively.

6.3.6.3 Ease of Use

                 Ease of use   Orientation   Satisfaction   Learnability
With TASKUI             3.99           4.3            4.1           4.35
Without TASKUI          3.35          3.23            n/a            n/a

(1=Strongly disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly agree)

Table 6.8: The participants’ responses to the usability questionnaire.

The participants’ perception of the ease of use of our system is determined by averaging their responses to the post-test questionnaire items relating to ease of use: questions q1 and q2 for the condition without TASKUI, and questions q1, q2, q7, q8, q9, q10, and q12 for the condition with TASKUI. Table 6.8 and Figure 6.5 summarise the participants’ responses to the questionnaire, reflecting their perceptions and opinions about accomplishing tasks with and without our system.

Figure 6.5: The participants’ responses to the usability questionnaire.

According to Table 6.8 and Figure 6.5, the participants agreed that our system was easy to use for accomplishing the tasks. Therefore, hypothesis h5 is accepted.

6.3.6.4 Orientation

The participants’ sense of orientation while accomplishing the tasks is reflected by questions q2, q3, and q4. Averaging the responses to these questions (see Table 6.8 and Figure 6.5) shows that the participants’ sense of orientation while executing the tasks via our system was positive. Therefore, hypothesis h6 is accepted.

6.3.6.5 Satisfaction

The participants' satisfaction with using TASKUI is reflected by question q6. The participants' average response to this question (see Table 6.8 and Figure 6.5) shows that the participants were satisfied with using TASKUI to accomplish the tasks (the average score is 4.1, above 'Agree'). Therefore, our hypothesis h7 is accepted.


6.3.6.6 Learnability

Learnability is reflected by the participants' responses to questions q11 and q13. The results (see Figure 6.5) show that the participants felt they would learn to use our system quickly and that little prior knowledge was required to operate it. Therefore, hypotheses h8 and h9 are accepted.

6.4 Evaluation of Task Suggestion Mechanism

Because the number of tasks supported by a smart environment can be massive, we have proposed a task suggestion mechanism to make it easy for a user to navigate to a task which he/she can achieve in the current smart environment (see Section 4.2.6 for the concept and Section 5.4 for an implementation). Task suggestion is a feature of the system that suggests available tasks relevant to the user's context (e.g., the user's location).

To evaluate the practicability, efficiency, and effectiveness of the task suggestion mechanism, we conducted a user survey in which we asked the users to suggest tasks for particular scenarios. We analysed the responses and found that, for each scenario, there were always common tasks; indeed, many tasks were suggested by most of the users. This verifies that the commonsense-based task suggestion mechanism is practical. We also compared the users' suggested tasks with the system's suggested tasks to measure the effectiveness of our task suggestion mechanism. The results show that all of the system's suggested tasks were included in the users' suggested tasks; on average, more than 50% of the system's suggested tasks were also suggested by all the users.

In this section, we present how this user survey was conducted, how we analysed the data, and the results.


6.4.1 Participants

The participants were five males and five females from the student community

where this research was conducted.

6.4.2 Procedures

We asked each participant to play the role of a postgraduate student at a university and to answer a questionnaire which had two parts.

6.4.2.1 Part One: Participants Suggest Tasks

The aim of this part was to find out the tasks that the users would be likely to accomplish in each of the scenarios. We wanted to know whether or not there are tasks which are common among the users in a particular scenario. If there are common tasks, the task suggestion mechanism is practical; otherwise it is not practical to implement. This is because the task suggestion mechanism relies on the common sense of tasks to reduce the massive number of available tasks in a user's particular context. A task which is common to a large percentage of users will be included in the suggestion list, while a task which is very uncommon will not.
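The commonness computation underlying this argument can be sketched as follows. The task names and the 50% cut-off are illustrative assumptions for this sketch, not the framework's actual implementation.

```python
# Sketch: computing task "commonness" from survey responses and keeping
# only tasks above a threshold. Data and the 50% cut-off are illustrative.

suggestions = [                      # one list of tasks per participant
    ["turn heater off", "lock doors"],
    ["turn heater off", "check weather"],
    ["turn heater off", "lock doors"],
]

def common_tasks(suggestions, threshold=0.5):
    """Map each task to the fraction of participants who suggested it,
    keeping only tasks suggested by more than `threshold` of them."""
    counts = {}
    for tasks in suggestions:
        for task in set(tasks):       # de-duplicate within a participant
            counts[task] = counts.get(task, 0) + 1
    n = len(suggestions)
    return {t: c / n for t, c in counts.items() if c / n > threshold}

print(sorted(common_tasks(suggestions)))  # ['lock doors', 'turn heater off']
```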

No.  Scenario
#1   You are about to leave your smart home and head to the smart university by smart car. It is winter and really cold.
#2   You've arrived in your personal smart office at the smart university and are about to start a working day. Your personal office has a printer, a TV, a coffee maker, a heater, a ceiling light, and a window drape. These devices are controllable. It is winter and really cold.
#3   You are pointing your hand at the smart TV in your office and about to command it to do some tasks for you.
#4   You are at the university's smart library.
#5   You are in your smart office. You grasp your smartphone and are about to do some tasks with it.

Table 6.9: The scenarios used in the user survey.

In this part, we provided each participant with five imagined scenarios. For each scenario, we asked the participant to write down, in order, at most five tasks which he/she would be likely to do in that scenario. We also mentioned to the participant that we were interested in tasks which involve controlling electronic devices and appliances. Table 6.9 lists the scenarios we provided to the participants in this user study. The complete questionnaire used for this part of the user survey is included in Appendix B.

6.4.2.2 Part Two: Participants Re-order Tasks Suggested by Our System

The aim of this part is to evaluate the efficiency and effectiveness of our task suggestion mechanism. We provided the participants with five ordered lists of tasks, each of which was suggested by our system for one of the scenarios mentioned in the previous part (see Table 6.10). The system generated these task lists based on the authors' experience, which may or may not reflect common sense. However, these task lists can later be improved by incorporating the results of the previous part. We asked the participants to re-order the task lists according to their preferences. The complete questionnaire used for this part of the user survey is included in Appendix C.

Scenario#1

Tasks suggested by TASKUI TASKUI’s order

Turn the house’s heater off 1

Turn the car’s heater on 2

Check for weather forecast 3

Lock the house’s doors 4

Scenario#2

Tasks suggested by TASKUI TASKUI’s order

Adjust room’s brightness 1

Turn on/adjust the heater 2

Make coffee 3

Play music 4

Read news 5


Scenario#3

Tasks suggested by TASKUI TASKUI’s order

Turn ON/OFF the TV 1

Change the channel 2

Adjust the volume 3

Record a program 4

Play a recorded program 5

Scenario#4

Tasks suggested by TASKUI TASKUI’s order

Search and locate book(s) 1

Borrow a book(s) 2

Return book(s) 3

Book a study carrel 4

Find a friend 5

Scenario#5

Tasks suggested by TASKUI TASKUI’s order

Make a call 1

Send a text message 2

Send an email 3

Take a photo 4

Connect to Wi-Fi 5

Table 6.10: The tasks suggested by the system for each of the scenarios.

6.4.3 Results

6.4.3.1 Part One

We present the results of Part One of the study in the following five tables (see Tables 6.11-6.15), each of which lists the tasks suggested by the participants for the corresponding scenario. The second column of each table gives the commonness of a task, i.e., the percentage of the participants who suggested that task for the scenario in question.


Tasks  Commonness

Turn off the house’s heaters 70%

Turn on the car’s heater 70%

Turn off the house’s lights 60%

Lock the doors 30%

Start the car’s engine 30%

Make hot coffee/tea 20%

Play radio/music in the car 20%

Check weather 20%

Make breakfast 10%

Turn off the stove 10%

Turn off the bathroom’s hot water 10%

Turn off the water taps 10%

Set time for the heater to auto-start 10%

Check the house’s security 10%

Stop music in the house 10%

Start the dishwasher 10%

Turn off all electric devices in the house 10%

Check public transport timetable 10%

Open/unlock the car’s door 10%

Turn off the desktop computer 10%

Find my bag, computer, keys... 10%

Table 6.11: Tasks suggested by participants for Scenario #1

Tasks  Commonness

Turn on the room’s heater 100%

Turn on the rooms’ lights 90%

Make coffee 80%

Open/adjust window drape 70%


Turn on the computer 60%

Print documents 20%

Check missed calls on the land-line phone 10%

Open the room’s door 10%

Table 6.12: Tasks suggested by participants for Scenario #2

Tasks  Commonness

Switch to a channel 100%

Adjust the volume 90%

Turn on/off TV 80%

Adjust the TV’s brightness and contrast 20%

Record a program 20%

Table 6.13: Tasks suggested by participants for Scenario #3

Tasks  Commonness

Borrow/checkout books 100%

Look/search for books 80%

Return books 60%

Book a study carrel 40%

Find a book’s location (book-shelf) 30%

Print documents 20%

Book a discussion/meeting room 20%

Find a water station 10%

Find an available study table 10%

Find a toilet 10%

Renew books 10%

Check borrow transactions 10%

Send an SMS 10%

Return a carrel key 10%


Table 6.14: Tasks suggested by participants for Scenario #4

Tasks  Commonness

Check emails 70%

Open Facebook 40%

Send SMS 40%

Make calls 30%

Send emails 30%

Connect to the Wi-Fi 20%

Check weather 20%

Print documents 20%

Open the web browser to read news 20%

Update applications 10%

Search for applications 10%

Open an application 10%

Look for a contact number 10%

Open a document file 10%

Take photos 10%

Check notifications 10%

Play music 10%

Check the car’s lock remotely and lock if it’s not been locked 10%

Check the car’s lights remotely and turn them off if they are on 10%

Transfer files from the phone to a computer 10%

Send photos to friends 10%

Play games 10%

Table 6.15: Tasks suggested by participants for Scenario #5

The results show that there are common tasks for each of the questioned scenarios. The common tasks which were suggested by more than 50% of the participants are highlighted (in bold) in Tables 6.11-6.15. This indicates that suggesting common tasks based on a user's context is practical and would be useful. Similar studies can be conducted to identify common tasks in other scenarios of interest.

6.4.3.2 Part Two

In Table 6.16, the column r denotes the orders suggested by TASKUI, while r′ denotes the average orders suggested by the participants. The first column gives the scenario number. The line charts visualise the trends and differences between the orders assigned by TASKUI and those assigned by the participants for the suggested tasks. The results and corresponding visualisations in Table 6.16 show that TASKUI and the participants ordered the tasks similarly, especially for Scenarios #2, #3, and #4.

#1

Tasks suggested by TASKUI r r′

Turn the house’s heater off 1 1.8

Turn the car’s heater on 2 3.3

Check for weather forecast 3 1.9

Lock the house’s doors 4 2.9

[Line chart: TASKUI's order vs. the participants' average order for Tasks 1-4.]

#2

Tasks suggested by TASKUI r r′

Adjust room’s brightness 1 1.0

Turn on/adjust the heater 2 2.2

Make coffee 3 3.3

Play music 4 4.1

Read news 5 4.1

[Line chart: TASKUI's order vs. the participants' average order for Tasks 1-5.]


#3

Tasks suggested by TASKUI r r′

Turn ON/OFF the TV 1 1.0

Change the channel 2 2.3

Adjust the volume 3 2.7

Record a program 4 4.1

Play a recorded program 5 4.8

[Line chart: TASKUI's order vs. the participants' average order for Tasks 1-5.]

#4

Tasks suggested by TASKUI r r′

Search and locate book(s) 1 1.6

Borrow book(s) 2 2.6

Return book(s) 3 3.0

Book a study carrel 4 3.1

Find a friend 5 4.2

[Line chart: TASKUI's order vs. the participants' average order for Tasks 1-5.]

#5

Tasks suggested by TASKUI r r′

Make a call 1 2.3

Send a text message 2 2.7

Send an email 3 2.8

Take a photo 4 4.2

Connect to Wi-Fi 5 2.5

[Line chart: TASKUI's order vs. the participants' average order for Tasks 1-5.]

Table 6.16: Participants' re-ordering of the tasks.

Figure 6.6 shows the differences, in percentages, between the orderings proposed by TASKUI and the orderings proposed by the participants. The differences are acceptable (the average difference is 20%). This confirms that the tasks suggested by TASKUI largely meet the users' expectations.
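The thesis does not give the exact formula behind these percentages; one plausible sketch normalises the mean absolute rank difference by the maximum possible difference. The r′ values below are taken from Table 6.16 (Scenario #2) purely as an example.

```python
# Sketch (our assumption, not the thesis's stated formula): express the
# difference between TASKUI's ranks and the participants' average ranks
# as a fraction of the maximum possible rank difference (n - 1).

def rank_difference(system_ranks, participant_ranks):
    """0.0 = identical orderings, 1.0 = maximally different."""
    n = len(system_ranks)
    mean_abs = sum(abs(r - rp) for r, rp in zip(system_ranks, participant_ranks)) / n
    return mean_abs / (n - 1)

# Scenario #2 data from Table 6.16: r vs. r'
print(rank_difference([1, 2, 3, 4, 5], [1.0, 2.2, 3.3, 4.1, 4.1]))  # about 0.075, i.e. 7.5%
```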


[Bar chart: percentage difference between the orderings for Scenarios #1-#5 and their average.]

Figure 6.6: Differences between TASKUI’s and the participants’ task orderings.

6.4.3.3 Combining Part One and Part Two

Recall that in the first part of this user study, we asked the participants to suggest tasks for different scenarios. The participants' suggested tasks were matched against TASKUI's suggested tasks from the second part to assess the effectiveness of our task suggestion mechanism (see Table 6.17; the "Matched" column indicates the percentage of the participants who suggested the corresponding task). The results show that most of the tasks suggested by TASKUI match the participants' expectations; some tasks are matched at high rates.
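The matching step can be sketched as below. Exact string equality is a simplification we assume for this illustration; in practice, semantically equivalent phrasings would need to be matched as well.

```python
# Sketch: for each system-suggested task, the fraction of participants whose
# free-form suggestions contained it. Exact string matching is a simplifying
# assumption; the task lists below are illustrative.

def match_rates(system_tasks, participant_suggestions):
    n = len(participant_suggestions)
    return {
        task: sum(task in tasks for tasks in participant_suggestions) / n
        for task in system_tasks
    }

system = ["make coffee", "play music"]
participants = [["make coffee"], ["make coffee", "read news"], ["play music"]]
rates = match_rates(system, participants)
print({t: round(p, 2) for t, p in rates.items()})  # {'make coffee': 0.67, 'play music': 0.33}
```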

Scenario#1

Tasks suggested by TASKUI Matched

Turn the house’s heater off 70%

Turn the car’s heater on 70%

Check for weather forecast 20%

Lock the house’s doors 30%

Scenario#2

Tasks suggested by TASKUI Matched

Turn on light and open window drape 80%

Turn on/adjust the heater 80%

Make coffee 100%

Play music 0%

Read news 0%


Scenario#3

Tasks suggested by TASKUI Matched

Turn ON/OFF the TV 80%

Change the channel 100%

Change the volume 90%

Record a program 20%

Play a recorded program 0%

Scenario#4

Tasks suggested by TASKUI Matched

Search and locate book(s) 80%

Borrow a book(s) 100%

Return book(s) 60%

Book a study carrel 40%

Find a friend 0%

Scenario#5

Tasks suggested by TASKUI Matched

Make a call 30%

Send a text message 40%

Send an email 30%

Take a photo 10%

Connect to Wi-Fi 20%

Table 6.17: Matching TASKUI’s and participants’ task suggestions.

Figure 6.7 shows the average match between the tasks suggested by TASKUI and those suggested by the participants for each scenario. On average, each task suggested by TASKUI was also suggested by almost 50% of the participants.

6.5 Applications of TASKCOM in Teaching Activities

TASKCOM can be applied in learning activities where a teacher asks the students

to accomplish some tasks, observes the progress of the students’ accomplishment,

and provides help remotely. For this purpose, the task specification schema can


[Bar chart: match percentage between TASKUI's and the participants' task suggestions for Scenarios #1-#5 and their average.]

Figure 6.7: Matching TASKUI’s and the participants’ task suggestions.

be customised to allow specifying other constraints of learning tasks such as time

and location. For example, a learning task in a biology laboratory requires the

students to accomplish a sequence of sub-tasks which involve the use of facilities

and appliances in the laboratory. Some of the sub-tasks may be accomplished by

directly manipulating the appliances and reporting the results using TASKUI on a

mobile computer, while other sub-tasks can be accomplished via TASKUI.

6.6 Applications of TASKCOM in AAL

Ambient Assisted Living (AAL) [92] refers to electronic environments that are sensitive and responsive to the presence of people and that provide assistance for maintaining independent living. TASKCOM can be applied in AAL to provide users (e.g., elderly people) with assistance in accomplishing their daily tasks. The type of assistance TASKCOM can provide is task guidance: step-by-step instructions for accomplishing a task. Task guidance is generated from task model specifications, which can be provided by carers. Moreover, because TASKCOM provides mechanisms for distributed collaboration and observation in accomplishing tasks, carers can provide an end-user with remote assistance, observe the accomplishment of a task, and provide help when the end-user needs it.


Chapter 7

Conclusion and Research Directions

In this thesis, we have described the shortcomings of the current application-centric paradigm and its function-based user interfaces when applied to smart environments, and we have presented our solution to the usability problem of smart environments. We have proposed applying the task-oriented paradigm to interaction with smart environments. Our primary aim is to provide mobile users with a system which helps them effectively and efficiently accomplish their tasks of using electronic devices and services in smart environments. We have presented a task-oriented computing framework (TASKCOM) as an implementation of the task-oriented paradigm.

From the end-users' perspective, TASKCOM allows users to operate devices in a smart environment at the task level, rather than at the level of applications and devices' functions. To achieve this, TASKCOM provides users with a task-based user interface (TASKUI) which allows them to interact with smart environments in terms of tasks. TASKUI can suggest tasks to users based on their context and provide task guidance, freeing them from the mental activities of the problem-solving process. Our evaluation has shown that when a user accomplishes the same tasks with the help of TASKCOM, many mental activities can be eliminated; these mental activities are always required when accomplishing the tasks under the application-centric paradigm. The user experiment which measured and compared the usability of TASKUI with traditional user interfaces shows that, by using TASKUI, the participants tended to accomplish tasks more efficiently: they spent less time, performed fewer cognitive activities, needed less new knowledge, made fewer errors, and required less help to complete the tasks.

From the developers' perspective, TASKCOM provides a high-level language that allows developers to rapidly program tasks. The framework also provides a methodology and a tool for the development and deployment of the task-oriented paradigm in smart environments.

We contend that the notion of the task provides users a much higher level of

abstraction than individual applications. We think that such a task abstraction

can apply more broadly to everyday tasks that users want to perform in their re-

spective spaces, and to things users want to do with their everyday appliances.

As smart environments become more complex, we believe there is a need for a

fundamental paradigm shift from “how to do” a task to “what to do”. The framework described in this thesis is our attempt at supporting such a shift, at least

from the end-users’ perspective. While we do not think the application-centric

paradigm will be replaced soon, the task-oriented paradigm can be built over

existing applications (i.e., applications expose their functions as services which

can be incorporated into our task specifications). Our prototype implementation

has demonstrated the feasibility and advantages of the task-oriented computing

paradigm. We also contend that while the task computing notion, as reviewed

earlier, has been considered in other work, our work is novel in terms of (i) com-

prehensiveness: TASKCOM supports the full range of the task-oriented comput-

ing ideal (e.g., modelling and specifying tasks, suggesting tasks based on user’s

context, executing tasks, and collaborating between distributed users for accom-

plishing tasks), (ii) broad applicability: we argue for the task-oriented computing

paradigm for smart environments, including cities, shopping centres, rooms, cars,

individual appliances, and personal virtual spaces and (iii) extensible open source

framework which includes a Java-based implementation of the task server appli-

cation, an implementation of TASKUI for the Android mobile platform, and an

RNC-based task specification language.


For future research directions, there are several aspects that we can investigate

further:

Contextualising the execution of a task: When a task is executed or resumed in

a new environment, all service bindings should be resolved based on avail-

able appliances and services in the environment. However, there are some cases in which the user may not want their task to be contextualised. For example,

the user wants to remotely turn off the heater at home which he/she forgot

to turn off when he/she left home. If this task is contextualised, it will turn

off the heater in the user’s office instead of the one at home. But in another

scenario where the user suspends the task of viewing television at home and

resumes it in his/her office, the task contextualisation will ensure that the

office’s television (not the home’s one) is turned on and set to the same set-

tings.

Automatic generation of task specifications: Task specifications can be automati-

cally created by end-user demonstration or by analysing online “ehow” data

(e.g., ehow.com or 43Things.com). The user should be able to generate a new

task specification by demonstrating a sequence of actions. The system can monitor a user's interactions with appliances in an environment and ask the user to label the recorded sequence as a task for future use.

Speech-based user interface for TASKUI: Our current implementation of TASKCOM

supports speech input only for task search while the user interaction for

task execution is graphical and text-based. To improve user experience with

TASKCOM, we’d like to add speech-based interaction for task execution. For

example, instead of showing a graphical user interface where the user can move a slider to set the volume of a television, TASKUI should accept spoken commands such as “seventy”, “up up”, “down down”, or “mute”.

Trouble-shooting and explanations: The system should provide trouble-shooting

and explanations when problems occur while executing a task. This fea-

ture could be added to TASKCOM using, e.g., Roadie [5]. Roadie has an AI


partial-order planner based on a commonsense knowledge base to provide

mixed-initiative assistance and debugging help when things go wrong.

Graphical tools for modelling tasks: Graphical tools can be developed for effi-

ciently and visually creating, composing, validating, and testing task models.

Task migration: The current implementation of TASKOS only supports migrating

the user interfaces of tasks across environments and hosting mobile comput-

ers. However, TASKOS does not support re-binding of required device func-

tions and services to the actual functions and services which are available in

a local smart environment. This is because of the static bindings of functions

and services in task specifications. The dynamic service and function bind-

ings can be achieved by abstractly specifying required functions and services

in task specifications. At runtime, the system resolves the actual bindings by

using reasoning mechanisms such as the ontology-based reasoning proposed

in the Olympus framework [59].

In summary, we believe that TASKCOM is a further step towards coping with

the increasing complexity of smart environments. Our reference implementation

of TASKCOM and examples of task model specifications are available by contact-

ing the authors or accessible at https://github.com/ccvo/taskcom/.


Appendix A

Questionnaire 1: Participants' Opinions about TASKUI

Please rate the following statements about accomplishing the given tasks WITHOUT using TASKUI:

(q1) It was easy for me to accomplish the tasks.

Strongly disagree Disagree Neutral Agree Strongly agree

(q2) I felt confident while accomplishing the tasks.

Strongly disagree Disagree Neutral Agree Strongly agree

(q3) I knew where to go next while accomplishing a task.

Strongly disagree Disagree Neutral Agree Strongly agree

(q4) I knew where to start a given task.

Strongly disagree Disagree Neutral Agree Strongly agree

(q5) I knew where I was while navigating through steps of a task.

Strongly disagree Disagree Neutral Agree Strongly agree

Please rate the following statements about the usability

of TASKUI:

(q1) It was easy for me to accomplish the tasks using TASKUI.

Strongly disagree Disagree Neutral Agree Strongly agree


(q2) I felt confident while using TASKUI to accomplish the tasks.

Strongly disagree Disagree Neutral Agree Strongly agree

(q3) I knew where to go next while accomplishing a task.

Strongly disagree Disagree Neutral Agree Strongly agree

(q4) I knew where to start a given task.

Strongly disagree Disagree Neutral Agree Strongly agree

(q5) I knew where I was while navigating through steps of a task.

Strongly disagree Disagree Neutral Agree Strongly agree

(q6) I would like to use TASKUI frequently.

Strongly disagree Disagree Neutral Agree Strongly agree

(q7) I found TASKUI unnecessarily complex.

Strongly disagree Disagree Neutral Agree Strongly agree

(q8) I would need support of a technical person to be able to use TASKUI.

Strongly disagree Disagree Neutral Agree Strongly agree

(q9) I found the functions of TASKUI were well organised.

Strongly disagree Disagree Neutral Agree Strongly agree

(q10) There was too much inconsistency in TASKUI.

Strongly disagree Disagree Neutral Agree Strongly agree

(q11) I would imagine that most people would learn to use TASKUI very quickly.

Strongly disagree Disagree Neutral Agree Strongly agree

(q12) I found TASKUI very cumbersome to use.

Strongly disagree Disagree Neutral Agree Strongly agree

(q13) I needed to learn a lot of things before I could get going with TASKUI.

Strongly disagree Disagree Neutral Agree Strongly agree


Appendix B

Questionnaire 2: Suggesting Tasks

Instruction

Imagine that you were a postgraduate at a university. For each of the following scenarios, what tasks/goals do you think you will be likely to do? Write down at most 5 tasks. We are interested in tasks which involve controlling electronic devices/appliances.

Example

Scenario 0 Your likely tasks

You've got out of bed and are about to do routine tasks as usual.

1. Turn the light on

2. Play favourite music

3. Turn bathroom’s hot water on

4. Prepare breakfast

5. Make coffee

The Scenarios and Your Likely Tasks

Scenario 1 Your likely tasks

You are about to leave home, heading to the university. It is winter and really cold.

1.

2.

3.

4.

5.


Scenario 2 Your likely tasks

You've arrived in your personal office at the university and are about to start a working day. Your personal office has a printer, a TV, a coffee maker, a heater, a ceiling light, and a window drape. It is winter and really cold.

1.

2.

3.

4.

5.

Scenario 3 Your likely tasks

You are pointing your hand at the TV in your office and are about to command it to do some tasks for you.

1.

2.

3.

4.

5.

Scenario 4 Your likely tasks

You are at the university's library.

1.

2.

3.

4.

5.

Scenario 5 Your likely tasks

You are in your office. You grasp your smartphone and are about to do some tasks with it.

1.

2.

3.

4.

5.


Appendix C

Questionnaire 3: Participants Re-ordering Suggested Tasks

Instruction

Imagine that you were a postgraduate at a university. For each of the following scenarios, our TASKUI system automatically suggests several tasks/goals. Please place these suggested tasks in order of the likelihood that you would do each task. You can mark a task as 'n/a' (i.e., no answer) if you think that you would not do that task or that it is not applicable to you.

Example

Scenario 0

You've got out of bed and are about to do routine tasks as usual.

Tasks suggested by TASKUI Your order

1. Turn the light on 2

2. Play favourite music 1

3. Turn on bathroom’s hot water n/a

4. Prepare breakfast 4

5. Make coffee 3

The Scenarios and Your Orders of Suggested Tasks

Scenario 1

You are about to leave home, heading to the university. It is winter and really cold.

Tasks suggested by TASKUI Your order

1. Turn the house’s heater off

2. Turn the car’s heater on

3. Check for weather forecast

4. Lock house’s doors and windows


Scenario 2

You've arrived in your personal office at the university and are about to start a working day. Your personal office has a printer, a TV, a coffee maker, a heater, a ceiling light, and a window drape. It is winter and really cold.

Tasks suggested by TASKUI Your order

1. Adjust room’s brightness

2. Turn on/adjust the heater

3. Make coffee

4. Play music

5. Read news

Scenario 3

You are pointing your hand at the TV in your office and are about to command it to do some tasks for you.

Tasks suggested by TASKUI Your order

1. Turn ON/OFF the TV

2. Change the channel

3. Adjust the volume

4. Record a program

5. Play a recorded program

Scenario 4

You are at the university's library.

Tasks suggested by TASKUI Your order

1. Search and locate book(s)

2. Borrow book(s)

3. Return book(s)

4. Book a study carrel

5. Find a friend

Scenario 5

You are in your office. You grasp your smartphone and are about to do some tasks with it.

Tasks suggested by TASKUI Your order

1. Make a call

2. Send a text message

3. Send an email

4. Take a photo

5. Connect to Wi-Fi


Appendix D
Experiment Data

D.1 Raw Data

Table D.1: Participant #1’s performance with TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 8 22 72 19 121

Number of clicks 3 6 14 8 31

Number of errors 0 0 2 1 3

Number of helps 0 0 1 1 2

Table D.2: Participant #1’s performance without TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 11 50 66 38 165

Number of clicks 5 8 9 8 30

Number of errors 2 1 0 1 4

Number of helps 0 0 1 2 3

Table D.3: Participant #2’s performance with TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 9 12 110 18 149


Number of clicks 3 6 10 8 27

Number of errors 0 0 1 0 1

Number of helps 0 0 1 0 1

Table D.4: Participant #2’s performance without TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 12 18 154 75 259

Number of clicks 3 5 20 13 41

Number of errors 0 0 4 3 7

Number of helps 0 0 3 0 3

Table D.5: Participant #3’s performance with TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 5 22 50 22 99

Number of clicks 3 9 13 8 33

Number of errors 0 3 0 0 3

Number of helps 0 1 0 0 1

Table D.6: Participant #3’s performance without TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 5 25 55 25 110

Number of clicks 2 6 9 6 23

Number of errors 0 0 0 0 0

Number of helps 0 0 1 0 1

Table D.7: Participant #4’s performance with TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%


Time (seconds) 7 25 66 23 121

Number of clicks 3 7 13 8 31

Number of errors 0 0 0 0 0

Number of helps 0 2 0 0 2

Table D.8: Participant #4’s performance without TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X 75%

Time (seconds) 23 max 74 30 177

Number of clicks 13 max 9 7 39

Number of errors 11 max 0 0 16

Number of helps 0 1 1 0 2

Table D.9: Participant #5’s performance with TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 12 15 79 44 150

Number of clicks 3 6 17 9 35

Number of errors 0 0 2 0 2

Number of helps 0 1 1 0 2

Table D.10: Participant #5’s performance without TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X 75%

Time (seconds) 9 max 120 25 204

Number of clicks 2 max 17 5 34

Number of errors 0 max 5 0 10

Number of helps 0 max 1 0 2

Table D.11: Participant #6’s performance with TASKUI.


Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 9 14 70 31 124

Number of clicks 3 4 12 8 27

Number of errors 0 0 0 0 0

Number of helps 0 0 1 0 1

Table D.12: Participant #6’s performance without TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 50 42 75 15 182

Number of clicks 4 10 14 5 33

Number of errors 2 5 2 0 9

Number of helps 0 1 1 0 2

Table D.13: Participant #7’s performance with TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 6 14 38 30 88

Number of clicks 3 6 12 9 30

Number of errors 0 0 0 0 0

Number of helps 0 0 0 0 0

Table D.14: Participant #7’s performance without TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 4 36 60 15 115

Number of clicks 2 6 9 5 22

Number of errors 0 0 0 0 0

Number of helps 0 0 1 0 1


Table D.15: Participant #8’s performance with TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 7 15 79 30 131

Number of clicks 3 6 12 8 29

Number of errors 0 0 0 0 0

Number of helps 0 0 1 0 1

Table D.16: Participant #8’s performance without TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 26 36 75 38 175

Number of clicks 3 5 14 6 28

Number of errors 1 0 2 0 3

Number of helps 1 0 1 0 2

Table D.17: Participant #9’s performance with TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 13 21 83 39 156

Number of clicks 3 6 12 8 29

Number of errors 0 0 0 0 0

Number of helps 0 0 1 0 1

Table D.18: Participant #9’s performance without TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X 75%

Time (seconds) 32 max 72 15 169

Number of clicks 2 max 12 5 29

Number of errors 0 max 2 0 7

Number of helps 1 max 1 0 3


Table D.19: Participant #10’s performance with TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X X 100%

Time (seconds) 6 9 50 37 102

Number of clicks 3 4 12 8 27

Number of errors 0 0 0 0 0

Number of helps 0 0 0 0 0

Table D.20: Participant #10’s performance without TASKUI.

Task 1 Task 2 Task 3 Task 4 Total

Completed? X X X 75%

Time (seconds) 10 max 77 68 205

Number of clicks 2 max 11 7 30

Number of errors 0 max 1 3 9

Number of helps 0 max 2 3 6

D.2 Synthetic Data
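
Each entry in the tables below is the corresponding per-task measurement copied from the raw tables in Section D.1, and the final column is the mean of that row across the ten participants. A minimal sketch of the computation, using the Task 1 completion times from Table D.21 (variable names are illustrative only):

```python
# Per-participant completion times (seconds) for Task 1,
# copied from the raw tables in Section D.1.
without_taskui = [11, 12, 5, 23, 9, 50, 4, 26, 32, 10]
with_taskui = [8, 9, 5, 7, 12, 9, 6, 7, 13, 6]

def average(values):
    """Mean across participants, rounded to two decimals as in the tables."""
    return round(sum(values) / len(values), 2)

print(average(without_taskui))  # 18.2
print(average(with_taskui))     # 8.2
```

Trials recorded as 'max' in the raw tables (abandoned tasks) are replaced by a cap value before averaging.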

D.2.1 Time

Table D.21: Time (in seconds) spent to complete Task 1.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 11 12 5 23 9 50 4 26 32 10 18.20

With TASKUI 8 9 5 7 12 9 6 7 13 6 8.20

Table D.22: Time (in seconds) spent to complete Task 2.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 50 18 25 50 50 42 36 36 50 50 40.70

With TASKUI 22 12 22 25 15 14 14 15 21 9 16.90

Table D.23: Time (in seconds) spent to complete Task 3.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average


Without TASKUI 66 154 55 74 120 75 60 75 72 77 82.80

With TASKUI 72 110 50 66 79 70 38 79 83 50 69.70

Table D.24: Time (in seconds) spent to complete Task 4.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 38 75 25 30 25 15 15 38 15 68 34.40

With TASKUI 19 18 22 23 44 31 30 30 39 37 29.30

D.2.2 Clicks

Table D.25: Number of clicks taken to complete Task 1.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 5 3 2 13 2 4 2 3 2 2 3.80

With TASKUI 3 3 3 3 3 3 3 3 3 3 3.00

Table D.26: Number of clicks taken to complete Task 2.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 8 5 6 10 10 10 6 5 10 10 8.00

With TASKUI 6 6 9 7 6 4 6 6 6 4 6.00

Table D.27: Number of clicks taken to complete Task 3.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 9 20 9 9 17 14 9 14 12 11 12.40

With TASKUI 14 10 13 13 17 12 12 12 12 12 12.70

Table D.28: Number of clicks taken to complete Task 4.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 8 13 6 7 5 5 5 6 5 7 6.70

With TASKUI 8 8 8 8 9 8 9 8 8 8 8.20

D.2.3 Errors


Table D.29: Number of errors made to complete Task 1.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 2 0 0 11 0 2 0 1 0 0 1.60

With TASKUI 0 0 0 0 0 0 0 0 0 0 0.00

Table D.30: Number of errors made to complete Task 2.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 1 0 0 5 5 5 0 0 5 5 2.60

With TASKUI 0 0 3 0 0 0 0 0 0 0 0.30

Table D.31: Number of errors made to complete Task 3.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 0 4 0 0 5 2 0 2 2 1 1.60

With TASKUI 2 1 0 0 2 0 0 0 0 0 0.50

Table D.32: Number of errors made to complete Task 4.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 1 3 0 0 0 0 0 0 0 3 0.70

With TASKUI 1 0 0 0 0 0 0 0 0 0 0.10

D.2.4 Helps

Table D.33: Number of helps taken to complete Task 1.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 0 0 0 0 0 0 0 1 1 0 0.20

With TASKUI 0 0 0 0 0 0 0 0 0 0 0.00

Table D.34: Number of helps taken to complete Task 2.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 0 0 0 1 1 1 0 0 1 1 0.50

With TASKUI 0 0 1 2 1 0 0 0 0 0 0.40


Table D.35: Number of helps taken to complete Task 3.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 1 3 1 1 1 1 1 1 1 2 1.30

With TASKUI 1 1 0 0 1 1 0 1 1 0 0.60

Table D.36: Number of helps taken to complete Task 4.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Without TASKUI 2 0 0 0 0 0 0 0 0 3 0.50

With TASKUI 1 0 0 0 0 0 0 0 0 0 0.10


Appendix E
Questionnaire Data: Participants' Responses to Questionnaire 1

Note that: 1=Strongly disagree, 2=Disagree, 3=Neutral, 4=Agree, 5=Strongly agree.

Table E.1: Participants' experience without TASKUI.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Statement 1 2 2 3 4 3 4 3 4 3 4 3.2

Statement 2 3 2 3 4 4 4 4 4 3 4 3.5

Statement 3 2 3 2 4 4 5 3 4 2 4 3.3

Statement 4 3 2 2 4 4 5 4 4 3 3 3.4

Statement 5 3 2 2 3 3 4 4 4 2 3 3.0

Table E.2: Participants' experience with TASKUI.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Statement 1 4 4 4 5 4 4 4 5 5 4 4.3

Statement 2 4 5 4 5 4 4 4 5 5 5 4.5

Statement 3 4 5 3 5 4 5 4 5 5 4 4.4

Statement 4 4 5 4 4 5 5 4 5 5 5 4.6

Statement 5 3 4 3 4 4 4 4 5 4 4 3.9

Statement 6 4 4 4 3 4 5 5 3 4 5 4.1

Statement 7 2 3 2 1 2 2 2 3 4 4 2.5

Statement 8 2 1 1 2 2 2 2 2 2 3 1.9

Statement 9 3 4 4 3 5 3 4 4 4 4 3.8

Statement 10 3 1 1 3 1 2 2 2 2 3 2.0


Statement 11 4 4 4 4 5 5 4 5 4 4 4.3

Statement 12 3 2 2 1 1 2 2 2 4 4 2.3

Statement 13 2 2 1 2 1 2 2 1 1 2 1.6


Appendix F
Questionnaire Data: Participants' Responses to Questionnaire 3

Table F.1: Scenario #1.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Task 1 2 2 2 3 1 2 1 3 1 1 1.80

Task 2 3 n/a 4 2 4 4 3 4 2 4 3.33

Task 3 1 1 1 4 2 1 2 2 3 2 1.90

Task 4 4 3 3 1 3 3 4 1 4 3 2.90

Table F.2: Scenario #2.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Task 1 1 1 1 1 1 1 1 1 1 1 1.00

Task 2 2 2 2 n/a 3 2 2 2 2 3 2.22

Task 3 3 4 4 n/a 2 4 4 3 4 2 3.33

Task 4 n/a 5 3 n/a 5 3 5 5 3 n/a 4.14

Task 5 4 3 5 n/a 4 5 3 4 5 4 4.11

Table F.3: Scenario #3.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Task 1 1 1 1 1 1 1 1 1 1 1 1.00

Task 2 2 2 3 3 2 2 2 2 3 2 2.30

Task 3 3 3 2 2 3 3 3 3 2 3 2.70


Task 4 n/a n/a 4 n/a 4 5 4 n/a 4 4 4.17

Task 5 n/a n/a 5 n/a 5 4 5 n/a 5 n/a 4.80

Table F.4: Scenario #4.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Task 1 1 2 1 1 2 3 3 1 1 1 1.60

Task 2 2 3 2 2 3 4 4 2 2 2 2.60

Task 3 3 4 3 3 4 1 2 3 3 4 3.00

Task 4 n/a 1 4 5 1 5 1 4 4 3 3.11

Task 5 n/a n/a 5 4 n/a 2 5 n/a 5 n/a 4.20

Table F.5: Scenario #5.

Participant #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 Average

Task 1 3 4 1 2 2 3 3 1 2 2 2.30

Task 2 2 3 2 3 3 4 4 2 1 3 2.70

Task 3 1 1 5 4 4 2 2 3 3 n/a 2.78

Task 4 n/a 2 3 5 5 5 5 4 5 n/a 4.25

Task 5 4 n/a 4 1 1 1 1 5 4 1 2.44
