
Standardization Actions Report

Project no. FP7-258030

Deliverable D6.2.1

Executive Summary

This document provides a description of the standardization actions for Serenoa, starting with a look at standardization opportunities, and then reviewing progress in the W3C MBUI Working Group.

Ref. Ares(2012)1074168 - 17/09/2012

Table of Contents

1 Introduction
2 Potential opportunities for standardization
  2.1 Task Models
  2.2 Domain Models
  2.3 Abstract UI Models
  2.4 Concrete UI Models
    2.4.1 WIMP (desktop GUI)
    2.4.2 Touch-based GUI (smart phones and tablets)
    2.4.3 Vocal UI
    2.4.4 Multimodal UI
    2.4.5 Industrial UI
  2.5 Context of Use
    2.5.1 General Considerations
    2.5.2 Industry Fulfilment of Safety Guidelines
    2.5.3 Automotive Mitigation of Driver Distraction
  2.6 Multidimensional Adaptation of Service Front Ends
    2.6.1 CARF Reference Framework
    2.6.2 CADS Design Space
    2.6.3 CARFO Multidimensional Adaptation Ontology
  2.7 Design-time adaptation rules
  2.8 Run-time adaptation rules
  2.9 Advanced Adaptation Logic Description Language (AAL-DL)
  2.10 Corporate Rules for Consistent User Experience
3 W3C Model-Based UI Working Group
  3.1 Introduction
  3.2 History
    3.2.1 MBUI Incubator Group
    3.2.2 MBUI Workshop
    3.2.3 Formation of MBUI Working Group
  3.3 MBUI Working Group Charter
  3.4 MBUI Submissions
    3.4.1 Advanced Service Front-End Description Language (ASFE-DL)
    3.4.2 The ConcurTaskTrees Notation (CTT)
    3.4.3 Useware Markup Language (UseML)
    3.4.4 User Interface Markup Language (UIML)
    3.4.5 Abstract Interactor Model (AIM) Specification
    3.4.6 Multimodal Interactor Mapping (MIM) Model Specification
    3.4.7 UsiXML
    3.4.8 MARIA
  3.5 MBUI WG Note - Introduction to Model-Based UI Design
  3.6 MBUI WG Note - Glossary of Terms
  3.7 MBUI WG Specification - Task Models for Model-Based UI Design
  3.8 MBUI WG Specification - Abstract User Interface Models
  3.9 Future Plans
4 CoDeMoDIS proposal for a COST Action
5 ISO 24744 standardisation action
6 Conclusions
7 References


1 Introduction

This report describes standardization actions for the Serenoa project, and will consider opportunities for standardization, current progress and future plans. Our motivation for work on standardization is to encourage the development and uptake of interoperable tools, at both design- and run-time, for context-aware model-based user interfaces.

The following diagram illustrates the Serenoa architecture; many of the components shown will be considered in later sections of this report, from the perspective of their potential for standardization.

For an introduction to the architecture and the benefits for a range of stakeholders, you are invited to read the Serenoa White Paper:

• Serenoa White Paper (PDF)

2 Potential opportunities for standardization

This section reviews the different areas of work underway in the Serenoa project, and provides a brief account of their potential for standardization.

2.1 Task Models

Task models provide a means for describing the set of tasks involved in an interactive system: how the tasks decompose into subtasks, which tasks are to be carried out by the user, the system or both, and the temporal sequence and inter-dependencies of tasks. Task models enable the user interaction to be described and reviewed without being distracted by the details of the user interface. As such, task models are not intended to be a complete description.

The primary task modeling language in Serenoa is ConcurTaskTrees (CTT). This has good prospects for standardization, and would enable interoperable exchange of task models between different user interface design tools. See the section on the W3C MBUI Working Group for information on how this is proceeding.

2.2 Domain Models

The general architecture for Serenoa assumes a clean separation between the user interface and the application back-end. The interface is defined through a domain model with named properties and methods. Each property can have an atomic value, such as a boolean, a number or a string. Alternatively, a property can have a structured value, with subsidiary properties and methods. Property values, method arguments and return values are described with a type language. The domain model may also include a means for the system to signal events or exceptions, for example an asynchronous change in the context of use, or an error in the user's input. A further consideration is whether a method is synchronous or asynchronous, i.e. whether it takes sufficient time to execute to have a noticeable impact on the user experience.
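To make the shape of such a domain model concrete, here is an informal Python sketch. It is illustrative only, not Serenoa's actual notation (the names Property, Method, DomainModel and the weather example are invented): named, typed properties; methods flagged as synchronous or asynchronous; and events the back-end can signal, such as a context change.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Property:
    name: str
    type: str            # e.g. "boolean", "number", "string", or a structured type
    value: object = None

@dataclass
class Method:
    name: str
    arg_types: List[str]
    return_type: str
    asynchronous: bool = False   # True if execution noticeably delays the UI

@dataclass
class DomainModel:
    properties: Dict[str, Property] = field(default_factory=dict)
    methods: Dict[str, Method] = field(default_factory=dict)
    event_handlers: Dict[str, Callable] = field(default_factory=dict)

    def signal(self, event: str, payload) -> None:
        # e.g. an asynchronous context change or an error in the user's input
        handler = self.event_handlers.get(event)
        if handler:
            handler(payload)

# Hypothetical example: a weather service front-end
model = DomainModel()
model.properties["city"] = Property("city", "string", "Rome")
model.methods["getForecast"] = Method("getForecast", ["string"], "string",
                                      asynchronous=True)

events = []
model.event_handlers["contextChange"] = events.append
model.signal("contextChange", {"connectivity": "offline"})
```

A formal language for this structure would, as noted below, need to pin down the type language and the event semantics far more precisely than this sketch does.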

Serenoa has so far avoided defining a separate formal language for domain models, and instead has embedded a limited treatment as part of the abstract user interface (ASFE-DL). An adequate formalization of domain models will be essential for interoperable interchange of user interface designs. The precise requirements will depend on the kinds of interactive systems that are being targeted.

2.3 Abstract UI Models

In the Serenoa architecture, abstract user interface design models describe interactive systems at a greater level of detail than is commonly the case for task models, but are still independent of the target platforms and modes of interaction. The ASFE-DL language can be loosely described as follows.

At the top level, the abstract user interface can be described in terms of a set of inter-related dialogues. Each dialogue has a set of interactors, which can be thought of as abstract versions of user interface controls. Each interactor is bound to the domain model, as well as to a variety of properties.
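This dialogue/interactor structure can be sketched as data. The following Python fragment is a loose illustration under stated assumptions — it is not ASFE-DL, and the identifiers (Interactor, Dialogue, the weather dialogues) are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Interactor:
    id: str
    kind: str                 # abstract control type, e.g. "selection", "text_input", "output"
    binding: str              # path of the bound domain-model property
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class Dialogue:
    id: str
    interactors: List[Interactor] = field(default_factory=list)
    transitions: Dict[str, str] = field(default_factory=dict)  # event -> target dialogue id

# An abstract UI as a set of inter-related dialogues
aui = [
    Dialogue("askCity",
             [Interactor("city", "text_input", "weather.city")],
             transitions={"submit": "showForecast"}),
    Dialogue("showForecast",
             [Interactor("forecast", "output", "weather.forecast")]),
]
```

The point of the abstraction is that nothing here commits to a platform: "text_input" could become a WIMP text box, a touch keyboard field, or a vocal prompt at the concrete level.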

There is a lot of potential for standardizing an abstract user interface design language. However, there are many more such languages than is the case for task models. This will make it harder to standardize, due to the need to forge bridges between different camps through the establishment of common use cases, a shared vocabulary and a synthesis of ideas. As such, ASFE-DL will be just one input into the standardization process.

The list of existing alternatives for AUIs is quite lengthy (Souchon and Vanderdonckt, 2003). Next we will provide more detailed information regarding the two AUI languages that comprise the consortium's portfolio of authored and co-authored languages in this field, namely UsiXML and MARIA.

The USer Interface EXtensible Markup Language (UsiXML) (Limbourg et al., 2005) is an XML-compliant markup language to describe user interfaces for multiple contexts and different modalities. UsiXML also allows non-developers to use the language to describe user interfaces, mainly because the elements of the UI can be described at a high level, regardless of the platform of use. The UsiXML language was submitted for a standardisation action plan in the context of the Similar network of excellence and of the Open Interface European project.

MARIA (Model-based language for Interactive Applications) (Paternò et al., 2009) is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments. For designers of multi-device user interfaces, one advantage of using a multi-layer description for specifying UIs is that they do not have to learn all the details of the many possible implementation languages supported by the various devices; they can reason in abstract terms, without being tied to a particular UI modality or, even worse, implementation language. In this way they can better focus on the semantics of the interaction, namely what the intended goal of the interaction is, regardless of the details and specificities of the particular environment considered.

2.4 Concrete UI Models

The concrete user interface involves a commitment to a class of device and modes of interaction. Some typical examples are examined in the following subsections. There are quite a few existing user interface languages at this level of abstraction. Some of these are widely deployed proprietary solutions, where the vendor may feel little imperative to add support for interoperable interchange of user interface designs. An open standard is likely to have a tough time in widening its support beyond a relatively small community of early adopters. The larger the community, the easier it is to gather the resources needed to create and maintain effective, easy to use tools and documentation. This is true for both open source and proprietary solutions.

Some examples of existing concrete user interface languages:

• UIML - early example of a user interface markup language
• MXML - introduced by Macromedia for compilation into Flash SWF
• XUL - introduced by the Mozilla Foundation for the Gecko engine
• XAML - introduced by Microsoft for use with their .NET framework
• OpenLaszlo (LZX) - introduced by Laszlo Systems for their presentation server
• MARIA - developed by ISTI-CNR, combining abstract and concrete UI
• XForms - developed by W3C for rich forms interfaces

2.4.1 WIMP (desktop GUI)

The abbreviation WIMP stands for "windows, icons, menus, pointer", and describes the kind of graphical user interface common on desktop computers running operating systems such as Microsoft Windows, MacOS and Linux + X Windows. WIMP user interfaces were originally developed by Xerox in the early seventies, but came to popular attention through the Apple Macintosh in the mid-eighties, and later Microsoft Windows. A concrete user interface modelling language for WIMP platforms can build upon a wealth of experience. Some examples of common features include:

• scroll-able windows, inline and pop-up dialogues
• click, double click, drag and drop idioms
• window minimization, maximization and close buttons
• icons for minimized applications and as clickable buttons
• tab controls for groups of related panes
• control bars with subsidiary controls
• drop down menus and combo boxes
• keyboard short cuts as alternatives to using the mouse/trackpad
• single and multi-line text boxes
• captioned radio buttons
• captioned check boxes
• up/down spinners
• buttons with text and icons as captions
• named boxes for grouping related controls
• a variety of layout policies, e.g. absolute, horizontal, vertical, grid and table layouts

Graphical editors for creating WIMP user interfaces typically consist of a palette of controls that can be dragged on to a canvas. Once there, each control has a set of associated properties that you can update through a property sheet. These can be used to attach the desired behaviour, and it is common to define this with a scripting language that bridges the user interface controls and the application back-end.

One challenge for WIMP user interfaces is adapting to varying window sizes and resolutions. To some extent this can be addressed through layout policies that make the best use of the available space. The end user may be able to vary the font size. Scrollable windows make it possible to view a large window in a smaller screen area. However, large changes in window size and resolution call for more drastic adaptations, and one way to address this is via splitting the user interface design into multiple concrete user interface models aimed at different sizes of window.

2.4.2 Touch-based GUI (smart phones and tablets)

In the last few years there has been a rapid deployment of phones and tablets featuring a high resolution colour screen with a multi-touch sensor. Touch-based devices typically lack traditional keyboards, and have given rise to a new set of user interface design patterns. Some common features include:

• tap, double tap, long tap, drag and drop
• two finger pinch, stretch and zoom
• swipe to pan
• single rather than multiple windows
• background services
• pop-up notifications
• icons for launching applications
• suspend and resume semantics for applications
• orientation sensing and portrait/landscape adaptation
• ambient light level sensing
• proximity sensing
• GPS-based location sensing
• wide variety of display resolutions
• Bluetooth, USB and NFC interfaces
• variations in support for Web standards, especially scripting APIs


Further study is needed to see just how practical it is to define and standardize a common concrete user interface language for different touch-based platforms such as Apple's iOS and Google's Android. Variations across devices create significant challenges for developers, although some of this can be hidden through the use of libraries.

2.4.3 Vocal UI

Vocal user interfaces are commonly used by automated call centres to provide service that customers can access by phone, using their voice and the phone's key pad. Vocal interfaces have to be designed to cope with errors in speech recognition, and with ungrammatical or out-of-domain responses by users. Simple vocal interfaces direct the user to respond in narrow and predictable ways that can be characterized by a speech grammar. Errors can be handled via repeating or rephrasing the prompt, or by giving users the choice of using the key pad. Some relevant existing W3C specifications are:

• Voice Extensible Markup Language (VoiceXML)
• Speech Recognition Grammar Specification (SRGS)
• Semantic Interpretation for Speech Recognition (SISR)
• Speech Synthesis Markup Language (SSML)
• Pronunciation Lexicon Specification (PLS)
• Emotion Markup Language (EmotionML)
• Voice Browser Call Control (CCXML)
• State Chart XML (SCXML)

VoiceXML is similar in some respects to the Hypertext Markup Language (HTML) in its use of links and forms. VoiceXML also provides support for spoken dialogues, in terms of error handling and the use of complementary languages such as SRGS for speech grammars and SSML for control of speech synthesis and prerecorded speech.

The Serenoa framework can be applied to vocal interfaces described in VoiceXML where the speech grammars can be readily derived. This is the case for applications involving navigation through a tree of menus, where the user is directed to repeat one of the choices given in a prompt, or to tap the key pad with the number of the choice, e.g.:

M: Do you want news, sports or weather?
U: Weather.
M: The weather today will be cold and windy, with a chance of rain.
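The control logic of such a menu dialogue is simple enough to sketch in a few lines of plain Python (illustrative only, not VoiceXML; the function name and parameters are invented): answers must fall within a small grammar, and out-of-grammar input is handled by re-prompting, up to a limit, before falling back.

```python
def menu_dialogue(answers, grammar=("news", "sports", "weather"), max_tries=3):
    """Return the recognized choice, re-prompting on out-of-grammar input.

    `answers` stands in for successive recognition results from the caller.
    """
    for attempt, answer in enumerate(answers):
        if attempt >= max_tries:
            break
        if answer.strip().lower() in grammar:
            return answer.strip().lower()
    # Grammar never matched: fall back, e.g. to key-pad input or an operator
    return None
```

In VoiceXML the same behaviour would be expressed declaratively with a menu or form, an SRGS grammar, and `noinput`/`nomatch` handlers, rather than imperative code.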


VoiceXML corresponds to the final user interface layer in the Cameleon Reference Framework, and could be complemented by higher-level concrete user interface models for vocal interfaces. Further work is needed to clarify the requirements before standardization can take place.

More sophisticated voice interfaces encourage users to answer in an open-ended way, where a statistical language model is used to classify the user's utterance, based upon an analysis of large numbers of recorded calls. The classification triggers a state transition network encoding the dialogue model. The following example is from "How may I help you?" by Gorin, Parker, Sachs and Wilpon (Proc. of IVTTA, October 1996):

M: How may I help you?
U: Can you tell me how much it is to Tokyo?
M: You want to know the cost of a call.
U: Yes, that's right.
M: Please hold for rate information.

This kind of vocal interface is a poor fit for the Serenoa framework, as it requires specialized tools for annotating and analyzing large numbers of calls (the above paper cited the use of a corpus of over 10,000 calls), and for the development of utterance classification hierarchies and state transition dialogue models.

State Chart XML (SCXML)

• http://www.w3.org/TR/scxml/

SCXML provides a means to describe state transition models of behaviour, and can be applied to vocal and multimodal user interfaces.
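SCXML itself is an XML notation; as a rough illustration of the state transition models it describes, here is a dictionary-driven Python sketch of the rate-enquiry dialogue above (state and event names are invented for illustration):

```python
# (state, event) -> next state; unmatched events leave the state unchanged
transitions = {
    ("start", "rate_query"): "confirm_rate",
    ("confirm_rate", "yes"): "play_rates",
    ("confirm_rate", "no"): "start",
}

def run(events, state="start"):
    """Feed a sequence of classified utterances through the transition table."""
    for event in events:
        state = transitions.get((state, event), state)
    return state
```

An SCXML document would express the same table declaratively with `<state>` and `<transition>` elements, and adds features this sketch omits, such as nested and parallel states.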

2.4.4 Multimodal UI

Multimodal user interfaces allow users to provide input with multiple modes, e.g. typing or speaking. A single utterance can involve multiple modes, e.g. saying "tell me more about this one" while tapping at a point on the screen. Likewise, the system can respond with multiple modes of output, e.g. visual, aural and tactile: using the screen to present something, playing recorded or synthetic speech, and vibrating the device.

The wide range of possible approaches to multimodal user interfaces has hindered the development of standards. Some work that has been considered includes:

• Using spoken requests to play video or music tracks, based upon the Voice Extensible Markup Language (VoiceXML)
• Loosely coupling vocal and graphical user interfaces, where these are respectively described with VoiceXML and HTML; see http://www.w3.org/TR/mmi-arch/
• Extending HTML with JavaScript APIs for vocal input and output; see http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/

The W3C Multimodal Interaction Working Group has worked on:

• The Extensible Multimodal Annotation Markup Language (EMMA), which defines a markup language for containing and annotating the interpretation of user input, e.g. speech and deictic gestures

• Ink Markup Language (InkML), which defines a markup language for capturing traces made by a stylus or finger on a touch sensitive surface. This opens the way to user interfaces where the user writes, rather than types or speaks, the information to be input.

Human face-to-face communication is richly multimodal, with facial gestures and body language that complement what is said. Some multimodal interfaces try to replicate this for system output by combining speech with an animated avatar (a talking head). Handwriting and speech also lend themselves to biometric techniques for user authentication, perhaps in combination with face recognition using video input.

Serenoa could address a limited class of multimodal user interfaces, but it is unclear that it is timely to take this to standardization. A possible exception is for automotive applications, where multimodal interaction can be used to mitigate concerns over driver distraction, since drivers need to keep focused on the task of driving safely.

2.4.5 Industrial UI

There is plenty of potential for applying the Serenoa framework to industrial settings. Manufacturing processes frequently involve complex user interfaces for monitoring and control purposes. These can combine mechanically operated valves and sensors together with sophisticated computer-based interactive displays. Model-based user interface design techniques could be applied to reduce the cost of designing and updating industrial user interfaces. This suggests the need for work on concrete user interface modelling languages that reflect the kinds of sensors and actuators needed on the factory floor. The need for specialized models for context awareness of interactive systems in industrial settings is covered in a later section.

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations, and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context-aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people are increasingly expecting to access applications on different devices: a high resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concerns, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect that interactive systems can adapt to our preferences and, crucially, to our own limitations: for instance, colour blindness, a need for increased contrast and big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity, and have difficulty with operating a mouse pointer or touch screen. Bigger controls are needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in a mobile device, and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.

2.5.2 Industry Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.

2.5.3 Automotive Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges: the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car-to-car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation, and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night, due to the lights of oncoming traffic, or in the day when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down, or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012, and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive-oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing and evaluating context-aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes

• What kinds of things are being adapted, e.g. the navigational flow, or the size of text and images?
• Who is triggering and controlling the adaption process, e.g. the end user, the system, or a third party?
• When does the adaptation take place, e.g. design-time or run-time?
• Where does adaptation take place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity?
• Which aspects of the context are involved in the adaptation?
• How is the adaptation performed, i.e. what strategies and tactics are involved?

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate and compare multiple applications in regard to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.

Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the Cameleon reference framework.
2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces.

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in a forward-chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.
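As a rough illustration of this execution model, the following Python sketch applies first-match forward chaining to a simple dictionary-based model. It is illustrative only (the function and the two example rules are invented, and real engines are far more sophisticated, as the RETE discussion below notes):

```python
def forward_chain(model, rules, max_cycles=100):
    """Fire the first matching rule and repeat until no rule matches."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(model):
                action(model)
                break          # first-match conflict resolution
        else:
            return model       # no rule matched: chaining is complete
    return model

# Hypothetical design-time rules over a dict-based model
rules = [
    (lambda m: m.get("platform") == "phone" and not m.get("single_window"),
     lambda m: m.update(single_window=True)),
    (lambda m: m.get("single_window") and m.get("layout") != "vertical",
     lambda m: m.update(layout="vertical")),
]
model = forward_chain({"platform": "phone"}, rules)
```

Note that first-match selection is only one of the conflict-resolution strategies mentioned above; a weighting scheme would replace the `break` with a scoring step over all matching rules.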

Forward chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface, and down to other targets at the concrete user interface layer.

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement is required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated with lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Framework (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use, and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud, or in the device itself where the resource constraints permit this.
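How the first two of these modules might interact can be sketched in the event-condition-action style. This is a hypothetical Python illustration, not Serenoa's API: the class names and the rule are invented, and the example rule deliberately echoes the colour-blindness adaptation discussed later for AAL-DL. The Run-time Engine would then apply the collected adaptations to the final user interface.

```python
class ContextManager:
    """Tracks the context of use and signals changes to registered listeners."""
    def __init__(self):
        self.context, self.listeners = {}, []

    def update(self, key, value):
        self.context[key] = value
        for listener in self.listeners:
            listener(self.context)        # signal the change (the "event")

class AdaptationEngine:
    """Fires rules whose conditions match the new context (condition/action)."""
    def __init__(self, rules):
        self.rules, self.adaptations = rules, []

    def on_context_change(self, context):
        for condition, action in self.rules:
            if condition(context):
                self.adaptations.append(action)

# Rule: if the user is colour-blind, then use an alternative colour palette
rules = [(lambda ctx: ctx.get("colour_blind"), "use_alternative_palette")]
cm, engine = ContextManager(), AdaptationEngine(rules)
cm.listeners.append(engine.on_context_change)
cm.update("colour_blind", True)
```

In the real architecture the action would be a declarative AAL-DL conclusion rather than a string tag, and the engine would also have to handle the state-preservation issue discussed next.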

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is not agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If the user is colour-blind, then use an alternative colour palette.

In XML, this looks like:
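The XML serialization appeared as a figure in the original deliverable and seems to have been lost in extraction; the fragment below is only a hypothetical sketch of how such a rule might be serialized (the element and attribute names are illustrative, not taken from the actual AAL-DL schema):

```xml
<rule>
  <event type="contextChange"/>
  <condition>
    <equals property="user.colourBlind" value="true"/>
  </condition>
  <action>
    <update target="ui.palette" value="alternative"/>
  </action>
</rule>
```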

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115

W3C went on to work on a device-independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers


• handheld - Intended for handheld devices (typically small screen, limited bandwidth)

• print - Intended for paged material, and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.

• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.

• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)
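For example, these media types could be used to give the same document different styling on screen, in print, and when spoken (an illustrative fragment, not taken from the specification):

```css
/* Larger sans-serif text for colour screens */
@media screen {
  body { font-family: sans-serif; font-size: 13px; }
}

/* Serif text and no navigation when printed */
@media print {
  body { font-family: serif; }
  nav  { display: none; }
}

/* Speech synthesizers; CSS2 used the 'aural' type for this */
@media speech {
  body { voice-family: female; }
}
```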

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities, and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011, and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face meeting was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items, and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaption to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design, along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications pass through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework; see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
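Loosely sketched in code (the representation is invented; only the AbstractInteractionUnit concept is taken from the submission):

```javascript
// Sketch of an abstract dialogue (AbstractInteractionUnit) with
// interactors for user input and a navigation event handler.
function makeDialogue(id) {
  return { id, interactors: [], handlers: {} };
}

const login = makeDialogue("login");
login.interactors.push({ type: "input", binds: "domain.username" });
login.interactors.push({ type: "activator", triggers: "authenticate" });

// Event handlers can be triggered by user actions or by the system.
login.handlers["authenticate"] = () => "navigate:home";

const result = login.handlers["authenticate"]();
```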

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR, first published at INTERACT'97, and has since been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator            Symbol
Enabling            T1 >> T2 or T1 []>> T2
Disabling           T1 [> T2
Interruption        T1 |> T2
Choice              T1 [] T2
Iteration           T1* or T1n
Concurrency         T1 ||| T2 or T1 |[]| T2
Optionality         [T]
Order Independency  T1 |=| T2

The second symbol for enabling is for task enabling with information passing; likewise, the second symbol for concurrency is for concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated with attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces; see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore, UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS; see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.
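The event-based state transitions that SCXML captures can be pictured with a small state machine; the states and events below are invented for the example and are not part of the AIM notation itself:

```javascript
// A discrete input interactor (e.g. a button) whose behaviour is an
// event-based state transition system, in the spirit of SCXML.
const transitions = {
  idle:      { focus: "focused" },
  focused:   { activate: "activated", blur: "idle" },
  activated: { reset: "idle" }
};

// Apply an event to the current state; unknown events are ignored.
function step(state, event) {
  return (transitions[state] && transitions[state][event]) || state;
}

let state = "idle";
state = step(state, "focus");
state = step(state, "activate");
```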

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium; see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models, using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower level
• Abstraction: from low to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework, with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements
• Relation: a group where two or more elements are related to each other
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements
• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime, modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlation between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
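The bidirectional interactor-data binding described above can be sketched as follows (a simplification with invented names; MARIA expresses this declaratively rather than in code):

```javascript
// Bidirectional binding: updating the interactor updates the bound
// data element, and updating the data element updates the interactor.
const dataModel = { temperature: 20 };
const interactor = { value: dataModel.temperature };

function setInteractor(v) { interactor.value = v; dataModel.temperature = v; }
function setData(v) { dataModel.temperature = v; interactor.value = v; }

setInteractor(25); // the user edits the field
const afterUser = dataModel.temperature;

setData(30);       // the back end changes the value
const afterBackend = interactor.value;
```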

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent, but implementation-language-independent, details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.
• Mobile CUIs model graphical interfaces for mobile devices.
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.
• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI that specifies how a given abstract interactor can be represented on the platform in question. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add platform-dependent (but still implementation-language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, in order to specify the possible concrete implementations of the abstract interactors. In this section we introduce the extension of the AUI meta-model that defines the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which gains the presentation_setting attribute, containing information on the title, the background (colour or image) and the font used; and Grouping, which gains the grouping_setting attribute, containing information on the grouping display technique (grid, fieldset, bullet, background colour or image) and on whether the elements are related by an ordering or hierarchy relation. The classes that have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
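To make this inheritance mechanism concrete, the fragment below sketches how a presentation with a grouped activator and single choice might be serialized. This is a hypothetical rendering: the attribute values and nesting are assumptions for illustration, not the normative MARIA XML syntax.

```xml
<!-- Hypothetical Desktop CUI fragment (not normative MARIA syntax) -->
<presentation presentation_setting="title: Login; background: #ffffff; font: sans-serif">
  <grouping grouping_setting="technique: fieldset; ordered: false">
    <!-- the abstract Activator refined into a concrete button -->
    <activator>
      <button label="Sign in"/>
    </activator>
    <!-- the abstract SingleChoice refined into radio buttons -->
    <single_choice>
      <radio_button label="Guest"/>
      <radio_button label="Administrator"/>
    </single_choice>
  </grouping>
</presentation>
```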

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can easily be defined as the set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation to be a set of communications between the vocal device and the user that can be treated as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinement that yields the Vocal CUI involves defining elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), of the speech recognizer (e.g. sensitivity, accuracy level) and of the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. the terminating DTMF character).

The interactor refinements are the following:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of the behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis or whether the application should ignore the event and continue.

  ◦ pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; or a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; or a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform to recognise the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relating to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of the vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions for identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;
• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);
• changing the synthesis properties (such as volume and gender);
• inserting keywords that explicitly mark the start and the end of the grouping.

Another substantial difference for vocal interfaces lies in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time), nomatch (the input provided does not match any acceptable input) and help (the user asks for support, in some platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, indicating whether or not to synthesize the last communication again.
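These three event types correspond directly to the catch elements defined in W3C VoiceXML 2.0, to which such vocal interfaces can be mapped. A minimal VoiceXML field illustrating them (the grammar file name is an assumption):

```xml
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <field name="city">
      <prompt>Which city do you want the forecast for?</prompt>
      <grammar src="cities.grxml" type="application/srgs+xml"/>
      <!-- no input received within the timeout -->
      <noinput>Sorry, I did not hear you. <reprompt/></noinput>
      <!-- input did not match the grammar -->
      <nomatch>Sorry, I did not understand. <reprompt/></nomatch>
      <!-- the user asked for help -->
      <help>Please say the name of a city, for example Rome.</help>
    </field>
  </form>
</vxml>
```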

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments, ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.

3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used in model-based user interface design and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this terminology to be focused on the needs of academic study rather than those of industrial users. We have therefore taken a selective approach to the terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT.

The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who cannot see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered optional and is not a normative part of the specification.
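As an illustration of what such interchange might look like, the fragment below sketches a small task model with enabling operators. The element and attribute names are assumptions for illustration only, not taken from the specification:

```xml
<!-- Hypothetical task model interchange fragment (illustrative names only) -->
<taskModel>
  <task name="WithdrawCash" category="abstract">
    <task name="InsertCard" category="interaction"/>
    <!-- enabling: the next task becomes available when this one completes -->
    <operator type="enabling"/>
    <task name="EnterPIN" category="interaction"/>
    <operator type="enabling"/>
    <task name="DispenseCash" category="application"/>
  </task>
</taskModel>
```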

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered critical for automotive user interfaces, where the issue of driver distraction is a major consideration: it is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.

3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such it is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further standardization work, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.

The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much at the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: UsiXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L.D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS 2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.

1 Introduction

This report describes standardization actions for the Serenoa project, considering opportunities for standardization, current progress, and future plans. Our motivation for work on standardization is to encourage the development and uptake of interoperable tools, at both design time and run-time, for context-aware model-based user interfaces.

The following diagram illustrates the Serenoa Architecture; many of the components shown will be considered in later sections of this report from the perspective of their potential for standardization.

For an introduction to the architecture and the benefits for a range of stakeholders, you are invited to read the Serenoa White Paper:

• Serenoa White Paper (PDF)

2 Potential opportunities for standardization

This section reviews the different areas of work underway in the Serenoa project and provides a brief account of their potential for standardization.

2.1 Task Models

Task models provide a means for describing the set of tasks involved in an interactive system: how the tasks decompose into subtasks, which tasks are to be carried out by the user, the system or both, and the temporal sequence and inter-dependencies of tasks. Task models enable the user interaction to be described and reviewed without being distracted by the details of the user interface. As such, task models are not intended to be a complete description.

The primary task modelling language in Serenoa is ConcurTaskTrees (CTT). This has good prospects for standardization and would enable the interoperable exchange of task models between different user interface design tools. See the section on the W3C MBUI Working Group for information on how this is proceeding.

2.2 Domain Models

The general architecture for Serenoa assumes a clean separation between the user interface and the application back end. The interface is defined through a domain model with named properties and methods. Each property can have an atomic value, such as a boolean, a number or a string; alternatively, a property can have a structured value with subsidiary properties and methods. Property values, method arguments and return values are described with a type language. The domain model may also include a means for the system to signal events or exceptions, for example an asynchronous change in the context of use or an error in the user's input. A further consideration is whether a method is synchronous or asynchronous, i.e. whether it takes sufficient time to execute to have a noticeable impact on the user experience.

Serenoa has so far avoided defining a separate formal language for domain models, and has instead embedded a limited treatment as part of the abstract user interface language (ASFE-DL). An adequate formalization of domain models will be essential for the interoperable interchange of user interface designs. The precise requirements will depend on the kinds of interactive systems being targeted.
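As a sketch of what such a formalization might cover, the fragment below illustrates the concepts from the preceding paragraphs: typed properties, structured values, an asynchronous method, and an event. All names are invented for illustration; Serenoa defines no such concrete syntax today.

```xml
<!-- Hypothetical domain model (illustrative only; no normative syntax exists) -->
<domainModel name="banking">
  <property name="balance" type="number"/>
  <property name="account">
    <!-- structured value with a subsidiary property -->
    <property name="iban" type="string"/>
  </property>
  <!-- asynchronous: slow enough to have a noticeable impact on the UX -->
  <method name="transfer" async="true">
    <argument name="amount" type="number"/>
    <result type="boolean"/>
  </method>
  <!-- events let the system signal e.g. a context change or an input error -->
  <event name="contextChanged"/>
</domainModel>
```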

2.3 Abstract UI Models

In the Serenoa architecture, abstract user interface models describe interactive systems at a greater level of detail than is commonly the case for task models, but are still independent of the target platforms and modes of interaction. The ASFE-DL language can be loosely described as follows.

At the top level, the abstract user interface can be described in terms of a set of inter-related dialogues. Each dialogue has a set of interactors, which can be thought of as abstract versions of user interface controls. Each interactor is bound to the domain model, as well as to a variety of properties.

There is a lot of potential for standardizing an abstract user interface design language. However, there are many more such languages than is the case for task models. This will make standardization harder, due to the need to forge bridges between different camps through the establishment of common use cases, a shared vocabulary, and a synthesis of ideas. As such, ASFE-DL will be just one input into the standardization process.

The list of existing alternatives for AUIs is quite lengthy (Souchon and Vanderdonckt, 2003). Below we provide more detailed information on the two AUI languages that comprise the consortium's portfolio of authored and co-authored languages in this field, namely UsiXML and MARIA.

The USer Interface EXtensible Markup Language (UsiXML) (Limbourg et al., 2005) is an XML-compliant mark-up language for describing user interfaces for multiple contexts and different modalities. UsiXML also allows non-developers to use the language to describe user interfaces, mainly because the elements of the UI can be described at a high level, regardless of the platform of use. The UsiXML language was submitted for a standardisation action plan in the context of the Similar network of excellence and of the OpenInterface European project.

MARIA (Model-based language for Interactive Applications) (Paternò et al., 2009) is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments. For designers of multi-device user interfaces, one advantage of using a multi-layer description for specifying UIs is that they do not have to learn all the details of the many possible implementation languages supported by the various devices: they can reason in abstract terms, without being tied to a particular UI modality or, even worse, implementation language. In this way they can better focus on the semantics of the interaction, namely what the intended goal of the interaction is, regardless of the details and specificities of the particular environment considered.

2.4 Concrete UI Models

The concrete user interface involves a commitment to a class of devices and modes of interaction. Some typical examples are examined in the following subsections. There are quite a few existing user interface languages at this level of abstraction. Some of these are widely deployed proprietary solutions, where the vendor may feel little imperative to add support for the interoperable interchange of user interface designs. An open standard is likely to have a tough time widening its support beyond a relatively small community of early adopters. The larger the community, the easier it is to gather the resources needed to create and maintain effective, easy-to-use tools and documentation. This is true for both open source and proprietary solutions.

Some examples of existing concrete user interface languages:

• UIML - an early example of a user interface markup language
• MXML - introduced by Macromedia for compilation into Flash SWF
• XUL - introduced by the Mozilla Foundation for the Gecko engine
• XAML - introduced by Microsoft for use with their .NET framework
• OpenLaszlo (LZX) - introduced by Laszlo Systems for their presentation server
• MARIA - developed by ISTI-CNR, combining abstract and concrete UI
• XForms - developed by W3C for rich forms interfaces

2.4.1 WIMP (desktop GUI)

The abbreviation WIMP stands for "windows, icons, menus, pointer" and describes the kind of graphical user interface common on desktop computers running operating systems such as Microsoft Windows, MacOS and Linux + X Windows. WIMP user interfaces were originally developed by Xerox in the early seventies, but came to popular attention through the Apple Macintosh in the mid-eighties and later through Microsoft Windows. A concrete user interface modelling language for WIMP platforms can build upon a wealth of experience. Some examples of common features include:

• scrollable windows, inline and pop-up dialogues
• click, double click, drag and drop idioms
• window minimization, maximization and close buttons
• icons for minimized applications and as clickable buttons
• tab controls for groups of related panes
• control bars with subsidiary controls
• drop-down menus and combo boxes
• keyboard shortcuts as alternatives to using the mouse/trackpad
• single and multi-line text boxes
• captioned radio buttons
• captioned check boxes
• up/down spinners
• buttons with text and icons as captions
• named boxes for grouping related controls
• a variety of layout policies, e.g. absolute, horizontal, vertical, grid and table layouts

Graphical editors for creating WIMP user interfaces typically consist of a palette of controls that can be dragged onto a canvas. Once there, each control has a set of associated properties that can be updated through a property sheet. These can be used to attach the desired behaviour, and it is common to define this with a scripting language that bridges the user interface controls and the application back end.

One challenge for WIMP user interfaces is adapting to varying window sizes and resolutions. To some extent this can be addressed through layout policies that make the best use of the available space. The end user may be able to vary the font size. Scrollable windows make it possible to view a large window in a smaller screen area. However, large changes in window size and resolution call for more drastic adaptations, and one way to address this is to split the user interface design into multiple concrete user interface models aimed at different sizes of window.

2.4.2 Touch-based GUI (smart phones and tablets)

In the last few years there has been a rapid deployment of phones and tablets featuring a high-resolution colour screen with a multi-touch sensor. Touch-based devices typically lack traditional keyboards, and have given rise to a new set of user interface design patterns. Some common features include:

• tap, double tap, long tap, drag and drop
• two-finger pinch, stretch and zoom
• swipe to pan
• single rather than multiple windows
• background services
• pop-up notifications
• icons for launching applications
• suspend and resume semantics for applications
• orientation sensing and portrait/landscape adaptation
• ambient light level sensing
• proximity sensing
• GPS-based location sensing
• wide variety of display resolutions
• Bluetooth, USB and NFC interfaces
• variations in support for Web standards, especially scripting APIs

Further study is needed to see just how practical it is to define and standardize a common concrete user interface language for different touch-based platforms such as Apple's iOS and Google's Android. Variations across devices create significant challenges for developers, although some of this can be hidden through the use of libraries.

2.4.3 Vocal UI

Vocal user interfaces are commonly used by automated call centres to provide service that customers can access by phone, using their voice and the phone's key pad. Vocal interfaces have to be designed to cope with errors in speech recognition and with ungrammatical or out of domain responses by users. Simple vocal interfaces direct the user to respond in narrow and predictable ways that can be characterized by a speech grammar. Errors can be handled via repeating or rephrasing the prompt, or by giving users the choice of using the key pad. Some relevant existing W3C specifications are:

• Voice Extensible Markup Language (VoiceXML)
• Speech Recognition Grammar Specification (SRGS)
• Semantic Interpretation for Speech Recognition (SISR)
• Speech Synthesis Markup Language (SSML)
• Pronunciation Lexicon Specification (PLS)
• Emotion Markup Language (EmotionML)
• Voice Browser Call Control (CCXML)
• State Chart XML (SCXML)

VoiceXML is similar in some respects to the Hypertext Markup Language (HTML) in its use of links and forms. VoiceXML also provides support for spoken dialogues in terms of error handling, and the use of complementary languages such as SRGS for speech grammars and SSML for control of speech synthesis and prerecorded speech.

The Serenoa framework can be applied to vocal interfaces described in VoiceXML where the speech grammars can be readily derived. This is the case for applications involving navigation through a tree of menus, where the user is directed to repeat one of the choices given in a prompt, or to tap the key pad with the number of the choice, e.g.:

M: Do you want news, sports or weather?
U: Weather.
M: The weather today will be cold and windy with a chance of rain.
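A menu dialogue of this kind maps naturally onto VoiceXML's menu element. The following sketch is illustrative only; the prompt texts are taken from the example above, and the form names are invented for the example:

```xml
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <menu>
    <prompt>Do you want news, sports or weather?</prompt>
    <!-- Each choice can be spoken or selected on the key pad (DTMF) -->
    <choice dtmf="1" next="#news">news</choice>
    <choice dtmf="2" next="#sports">sports</choice>
    <choice dtmf="3" next="#weather">weather</choice>
    <noinput>Sorry, I didn't hear you. <reprompt/></noinput>
    <nomatch>Sorry, I didn't understand. <reprompt/></nomatch>
  </menu>
  <form id="weather">
    <block>
      <prompt>The weather today will be cold and windy,
              with a chance of rain.</prompt>
    </block>
  </form>
  <!-- forms for news and sports omitted -->
</vxml>
```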


VoiceXML corresponds to the final user interface layer in the Cameleon Reference Framework, and could be complemented by a higher level concrete user interface model for vocal interfaces. Further work is needed to clarify the requirements before standardization can take place.

More sophisticated voice interfaces encourage users to answer in an open ended way, where a statistical language model is used to classify the user's utterance, based upon an analysis of large numbers of recorded calls. The classification triggers a state transition network encoding the dialogue model. The following example is from "How may I help you?" by Gorin, Parker, Sachs and Wilpon, Proc. of IVITA, October 1996:

M: How may I help you?
U: Can you tell me how much it is to Tokyo?
M: You want to know the cost of a call?
U: Yes, that's right.
M: Please hold for rate information.

This kind of vocal interface is a poor fit for the Serenoa framework, as it requires specialized tools for annotating and analyzing large numbers of calls (the above paper cited the use of a corpus of over 10,000 calls), and for the development of utterance classification hierarchies and state transition dialogue models.

State Chart extensible Markup Language (SCXML)

• http://www.w3.org/TR/scxml/

SCXML provides a means to describe state transition models of behaviour, and can be applied to vocal and multimodal user interfaces.
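As an illustration, a fragment of the menu dialogue above could be modelled in SCXML as a small state machine; the state and event names here are invented for the example:

```xml
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
       initial="main_menu">
  <state id="main_menu">
    <!-- The recognizer classifies the user's response into an event -->
    <transition event="choice.weather" target="weather_report"/>
    <transition event="choice.news" target="news_report"/>
    <!-- On a recognition error, re-enter the menu and reprompt -->
    <transition event="error.nomatch" target="main_menu"/>
  </state>
  <final id="weather_report"/>
  <final id="news_report"/>
</scxml>
```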

2.4.4 Multimodal UI

Multimodal user interfaces allow users to provide input with multiple modes, e.g. typing or speaking. A single utterance can involve multiple modes, e.g. saying "tell me more about this one" while tapping at a point on the screen. Likewise, the system can respond with multiple modes of output, e.g. visual, aural and tactile, using the screen to present something, playing recorded or synthetic speech, and vibrating the device.

The wide range of possible approaches to multimodal user interfaces has hindered the development of standards. Some work that has been considered includes:


• Using spoken requests to play video or music tracks, based upon the Voice Extensible Markup Language (VoiceXML).

• Loosely coupling vocal and graphical user interfaces, where these are respectively described with VoiceXML and HTML, see http://www.w3.org/TR/mmi-arch/.

• Extending HTML with JavaScript APIs for vocal input and output, see http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/.

The W3C Multimodal Interaction Working Group has worked on

• The Extensible Multimodal Annotation Markup Language (EMMA), which defines a markup language for containing and annotating the interpretation of user input, e.g. speech and deictic gestures.

• Ink Markup Language (InkML), which defines a markup language for capturing traces made by a stylus or finger on a touch sensitive surface. This opens the way to user interfaces where the user writes, rather than types or speaks, the information to be input.
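For instance, a spoken utterance recognized with 80% confidence might be wrapped in EMMA annotations along the following lines; the application-specific payload inside the interpretation is hypothetical:

```xml
<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <emma:interpretation id="int1"
      emma:medium="acoustic" emma:mode="voice"
      emma:confidence="0.8"
      emma:tokens="tell me more about this one">
    <!-- Application-specific semantics derived from the utterance -->
    <command action="describe" target="selected-item"/>
  </emma:interpretation>
</emma:emma>
```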

Human face to face communication is richly multimodal, with facial gestures and body language that complement what is said. Some multimodal interfaces try to replicate this for system output by combining speech with an animated avatar (a talking head). Handwriting and speech also lend themselves to biometric techniques for user authentication, perhaps in combination with face recognition using video input.

Serenoa could address a limited class of multimodal user interfaces, but it is unclear that it is timely to take this to standardization. A possible exception is for automotive applications, where multimodal interaction can be used to mitigate concerns over driver distraction, where drivers need to keep focused on the task of driving safely.

2.4.5 Industrial UI

There is plenty of potential for applying the Serenoa framework to industrial settings. Manufacturing processes frequently involve complex user interfaces for monitoring and control purposes. These can combine mechanically operated valves and sensors together with sophisticated computer-based interactive displays. Model-based user interface design techniques could be applied to reduce the cost of designing and updating industrial user interfaces. This suggests the need for work on concrete user interface modelling languages that reflect the kinds of sensors and actuators needed on the factory floor. The need for specialized models for context awareness of interactive systems in industrial settings is covered in a later section.

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people are increasingly expecting to access applications on different devices: a high resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concerns, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect that interactive systems can adapt to our preferences and, crucially, to our own limitations: for instance colour blindness, a need for increased contrast and big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity and have difficulty operating a mouse pointer or touch screen. Bigger controls are needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in mobile devices and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.

2.5.2 Industry: Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.

2.5.3 Automotive: Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges: the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car to car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night due to the lights of oncoming traffic, or in the day when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down, or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas on context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012, and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing and evaluating context aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes

• What kinds of things are being adapted, e.g. the navigational flow or the size of text and images?
• Who is triggering and controlling the adaptation process, e.g. the end user, the system, or a third party?
• When does the adaptation take place, e.g. design-time or run-time?
• Where does adaptation take place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity?
• Which aspects of the context are involved in the adaptation?
• How is the adaptation performed, i.e. what strategies and tactics are involved?

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate and compare multiple applications with regard to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.


Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles

1. To propagate the effects of changes across layers in the Cameleon reference framework.

2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces.

One way to represent adaptation rules is as follows

IF condition THEN conclusion

When executed in a forward chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.

Forward chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface, and down to other targets at the concrete user interface layer.

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement are required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to separation of different levels of detail, so that high level rules avoid dealing with details that are better treated with lower level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Framework (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud, or in the device itself where the resource constraints permit this.

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high level adaptation rules expressed in AAL-DL into the final user interface.
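For example, an adaptation such as "use larger controls on small touch screens" could be compiled into a style sheet with CSS Media Queries; the selectors and the breakpoint below are illustrative:

```css
/* Default presentation for desktop-sized screens */
.toolbar button { min-height: 24px; font-size: 100%; }

/* Adaptation compiled for small touch-based devices */
@media screen and (max-width: 480px) {
  .toolbar button { min-height: 44px; font-size: 130%; }
  .sidebar { display: none; } /* drop secondary content */
}
```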


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1 AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first order adaptation rules for a specific context of use, and second order rules that select which first order rules to apply. Further work is under consideration for third order rules that act on second order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high level syntax aimed at direct editing.

Here is an example of a rule

bull If user is colour-blind then use alternative color palette

In XML this looks like
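A hypothetical serialization, loosely following the action subclasses of the metamodel, might look as follows; the element and attribute names here are illustrative only, not the normative AAL-DL syntax defined in Deliverable D3.3.1:

```xml
<!-- Illustrative only: not the normative AAL-DL syntax -->
<rule id="colour-blind-palette">
  <event type="contextChange" source="user"/>
  <condition>
    <equals property="user.colourBlindness" value="true"/>
  </condition>
  <action>
    <update target="ui.palette" value="alternativeColorPalette"/>
  </action>
</rule>
```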

A significant challenge will be to explore the practicality of enabling developers to work with a high level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items, and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set ofdevice categories

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers
• handheld - Intended for handheld devices (typically small screen, limited bandwidth)
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities, and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter/
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face to face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, links to talks, and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012


and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and by the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML, or UIML.

Test assertions and test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers).

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications pass through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet the requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
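Loosely following this description, a dialogue might be sketched as below; the element and attribute names are illustrative assumptions, not the submitted ASFE-DL schema:

```xml
<!-- Hypothetical ASFE-DL-style sketch: a login dialogue with two input
     interactors, an activator invoking a domain-model method, and an
     event handler navigating to another dialogue on success. -->
<abstractInteractionUnit id="login">
  <input id="username" binding="user.name"/>
  <input id="password" binding="user.password"/>
  <activator id="submit" method="authenticate">
    <eventHandler event="activation" navigateTo="home"/>
  </activator>
</abstractInteractionUnit>
```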

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1n
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

The second symbol for enabling denotes task enabling with information passing. Likewise, the second symbol for concurrency denotes concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.
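As a rough illustration of such an interchange format, a fragment for a small task model might look like the following; the element names are assumptions for illustration only and do not reproduce the actual CTT schema:

```xml
<!-- Hypothetical CTT-style interchange fragment: a hierarchical task
     with an enabling (>>) relation between two subtasks, allocated to
     the user and the system respectively. -->
<taskModel name="MakeReservation">
  <task id="Reserve" category="abstraction">
    <subtask id="EnterDetails" category="interaction"/>
    <temporalOperator type="enabling" informationPassing="true"/>
    <subtask id="ShowConfirmation" category="application"/>
  </task>
</taskModel>
```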

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated with attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item
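A use model built from these elementary use objects might be sketched as follows; the markup is illustrative and does not reproduce the actual UseML schema:

```xml
<!-- Hypothetical UseML-style use model for an industrial device,
     showing a use object annotated with a user group and containing
     each of the five kinds of elementary use objects. -->
<useModel>
  <useObject name="PumpControl" userGroups="operator">
    <trigger name="StartPump"/>
    <select name="OperatingMode"/>
    <input name="TargetPressure"/>
    <output name="CurrentPressure"/>
    <change name="AdjustFlowRate"/>
  </useObject>
</useModel>
```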

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components from the description are present, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
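As a minimal sketch of how these sections fit together, consider a single button bound to a Java implementation class; the attribute names here are indicative only and should be checked against the UIML specification:

```xml
<!-- Illustrative UIML sketch: one button, its Java rendering class
     bound in the style section, and a press event bound to an action. -->
<uiml version="2.0">
  <interface name="hello" class="MyApps">
    <description>
      <component name="helloButton" class="Button"/>
    </description>
    <structure>
      <part name="helloButton"/>
    </structure>
    <style>
      <property component="helloButton" name="renderer">java.awt.Button</property>
      <property component="helloButton" name="label">Say hello</property>
    </style>
    <events>
      <event component="helloButton" name="pressed" action="app.sayHello"/>
    </events>
  </interface>
</uiml>
```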

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details see the link above.
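For instance, the behaviour of a simple discrete input interactor could be described in SCXML along the following lines (the states and events are illustrative, not taken from the AIM submission):

```xml
<!-- SCXML sketch: an interactor toggling between idle and focused,
     storing the latest input value in its data model. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <datamodel>
    <data id="value"/>
  </datamodel>
  <state id="idle">
    <transition event="focus" target="focused"/>
  </state>
  <state id="focused">
    <transition event="input.change">
      <assign location="value" expr="_event.data"/>
    </transition>
    <transition event="blur" target="idle"/>
  </state>
</scxml>
```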

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation
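A drag-and-drop mapping of this kind could be sketched as follows; the markup is a hypothetical illustration of the observation/operator/action structure, not the MIM schema itself:

```xml
<!-- Hypothetical MIM-style mapping: an observation of a pointer state
     chart linked via a complementary operator to two actions. -->
<mapping name="dragAndDrop">
  <observation statechart="pointer" state="dragging"/>
  <operator type="complementary">
    <action target="sourceItem" event="detach"/>
    <action target="dropZone" event="highlight"/>
  </operator>
</mapping>
```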


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower level
• Abstraction: from low to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.

• Relation: a group where two or more elements are related to each other.

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.

• Repeater: used to repeat the content according to data retrieved from a generic data source.

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime, modifying the state of an interactor will also change the value of the bound data element, and vice versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back end (e.g. web services, code libraries, databases, etc.). A declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
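Putting these concepts together, an abstract presentation might be sketched as below; the element names echo the concepts above but are not the exact MARIA XML schema:

```xml
<!-- Illustrative MARIA-style abstract presentation: a grouping with a
     single_choice and a text_edit bound to data model elements, and an
     activator whose activation event invokes an external function. -->
<presentation name="search_form">
  <grouping>
    <single_choice id="category" data_reference="searchCategory"/>
    <text_edit id="query" data_reference="searchText"/>
    <activator id="search">
      <activation_event external_function="doSearch"/>
    </activator>
  </grouping>
</presentation>
```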

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers

• Mobile CUIs model graphical interfaces for mobile devices

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute with information on the title, background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute with information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI offers four solutions for identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)

• Changing the synthesis properties (such as volume and gender)

• Inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time); nomatch (the input provided does not match any possible acceptable input); and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs; and re-prompt, to indicate whether or not to synthesize the last communication again.
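A vocal interactor handling these events might be sketched as follows; the element and attribute names are illustrative rather than normative:

```xml
<!-- Hypothetical Vocal CUI sketch: a vocal selection with handlers for
     the noinput, nomatch and help events, each carrying a message and
     a re-prompt flag. -->
<vocal_selection id="destination">
  <request>Where would you like to go?</request>
  <noinput message="Sorry, I did not hear you." re-prompt="true"/>
  <nomatch message="That city is not served." re-prompt="true"/>
  <help message="Say the name of a destination city." re-prompt="false"/>
</vocal_selection>
```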

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environment", ACM Transactions on Computer-Human Interaction, Vol. 16, N. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used in model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this terminology to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation, and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
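The automotive scenario might be expressed in a task model fragment such as the following; the markup is an illustrative sketch, not the normative schema of the Working Draft:

```xml
<!-- Hypothetical task-model sketch: an ongoing navigation task that a
     hazard alert can suspend and later resume. -->
<task id="Navigate" category="interaction">
  <temporalOperator type="suspendResume">
    <task id="HazardAlert" category="application"/>
  </temporalOperator>
</task>
```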


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



1 Introduction

This report describes standardization actions for the Serenoa project, and will consider opportunities for standardization, current progress, and future plans. Our motivation for work on standardization is to encourage the development and uptake of interoperable tools, at both design and run-time, for context-aware model-based user interfaces.

The following diagram illustrates the Serenoa Architecture, and many of the components shown will be considered in later sections of this report, from the perspective of their potential for standardization.

For an introduction to the architecture and the benefits for a range of stakeholders, you are invited to read the Serenoa White Paper:

• Serenoa White Paper (PDF)

2 Potential opportunities for standardization

This section reviews the different areas of work underway in the Serenoa project, and provides a brief account of their potential for standardization.

2.1 Task Models

Task models provide a means for describing the set of tasks involved in an interactive system: how the tasks decompose into subtasks, which tasks are to be carried out by the user, the system, or both, and the temporal sequence and inter-dependencies of tasks. Task models enable the user interaction to be described and reviewed without being distracted by the details of the user interface. As such, task models are not intended to be a complete description.

The primary task modelling language in Serenoa is ConcurTaskTrees (CTT). This has good prospects for standardization, and would enable interoperable exchange of task models between different user interface design tools. See the section on the W3C MBUI Working Group for information on how this is proceeding.

2.2 Domain Models

The general architecture for Serenoa assumes a clean separation between the user interface and the application back-end. The interface is defined through a domain model with named properties and methods. Each property can have an atomic value, such as a boolean, a number or a string. Alternatively, a property can have a structured value with subsidiary properties and methods. Property values, method arguments and return values are described with a type language. The domain model may also include a means for the system to signal events or exceptions, for example, an asynchronous change in the context of use, or an error in the user's input. A further consideration is whether a method is synchronous or asynchronous, i.e. it takes sufficient time to execute to have a noticeable impact on the user experience.
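To make the shape of such a domain model concrete, the following Python sketch illustrates the ideas above. The class and property names are invented for illustration, and are not part of ASFE-DL or any Serenoa deliverable.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a domain model: named, typed properties and
# methods, with a synchronous/asynchronous distinction for methods.

@dataclass
class Property:
    name: str
    type: str            # e.g. "boolean", "number", "string", or a structured type
    value: object = None

@dataclass
class Method:
    name: str
    arg_types: list
    return_type: str
    asynchronous: bool = False   # async methods may noticeably delay the UI

@dataclass
class DomainModel:
    properties: dict = field(default_factory=dict)
    methods: dict = field(default_factory=dict)

    def add_property(self, prop: Property):
        self.properties[prop.name] = prop

model = DomainModel()
model.add_property(Property("title", "string", "Weather"))
model.methods["refresh"] = Method("refresh", [], "void", asynchronous=True)
```

A real formalization would also need to cover events, exceptions, and the type language for structured values, as discussed above.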

Serenoa has so far avoided defining a separate formal language for domain models, and instead has embedded a limited treatment as part of the abstract user interface (ASFE-DL). An adequate formalization of domain models will be essential for interoperable interchange of user interface designs. The precise requirements will depend on the kinds of interactive systems that are being targeted.

2.3 Abstract UI Models

In the Serenoa architecture, abstract user interface design models describe interactive systems at a greater level of detail than is commonly the case for task models, but are still independent of the target platforms and modes of interaction. The ASFE-DL language can be loosely described as follows.

At the top level, the abstract user interface can be described in terms of a set of inter-related dialogues. Each dialogue has a set of interactors, which can be thought of as abstract versions of user interface controls. Each interactor is bound to the domain model, as well as to a variety of properties.
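The dialogue/interactor structure just described might be sketched as follows. The class names and bindings here are purely illustrative, and are not the actual ASFE-DL vocabulary.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: an abstract UI as a set of dialogues, each
# containing interactors bound to the domain model. Names are invented.

@dataclass
class Interactor:
    name: str
    binding: str                           # path into the domain model
    properties: dict = field(default_factory=dict)

@dataclass
class Dialogue:
    name: str
    interactors: list = field(default_factory=list)

@dataclass
class AbstractUI:
    dialogues: list = field(default_factory=list)

ui = AbstractUI(dialogues=[
    Dialogue("search", interactors=[
        Interactor("query", binding="search.terms"),
        Interactor("submit", binding="search.run", properties={"role": "trigger"}),
    ]),
])
```

Note that nothing here commits to a platform or modality: the "submit" interactor could become a button, a spoken command, or a key press at the concrete level.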

There is a lot of potential for standardizing an abstract user interface design language. However, there are many more such languages than is the case for task models. This will make it harder to standardize, due to the need to forge bridges between different camps through the establishment of common use cases, a shared vocabulary, and a synthesis of ideas. As such, ASFE-DL will be just one input into the standardization process.

The list of existing alternatives for AUIs is quite lengthy (Souchon and Vanderdonckt, 2003). Next, we will provide more detailed information regarding the two AUI languages that comprise the consortium's portfolio of authored and co-authored languages in this field, namely UsiXML and MARIA.

The USer Interface EXtensible Markup Language (UsiXML) (Limbourg et al., 2005) is an XML-compliant mark-up language to describe user interfaces for multiple contexts and different modalities. UsiXML also allows non-developers to use the language to describe user interfaces, mainly because the elements of the UI can be described at a high level, regardless of the platform of use. The UsiXML language was submitted for a standardisation action plan in the context of the Similar network of excellence and of the Open Interface European project.

MARIA (Model-based language for Interactive Applications) (Paternò et al., 2009) is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments. For designers of multi-device user interfaces, one advantage of using a multi-layer description for specifying UIs is that they do not have to learn all the details of the many possible implementation languages supported by the various devices. Instead, they can reason in abstract terms, without being tied to a particular UI modality or, even worse, implementation language. In this way, they can better focus on the semantics of the interaction, namely what the intended goal of the interaction is, regardless of the details and specificities of the particular environment considered.

2.4 Concrete UI Models

The concrete user interface involves a commitment to a class of device and modes of interaction. Some typical examples are examined in the following subsections. There are quite a few existing user interface languages at this level of abstraction. Some of these are widely deployed proprietary solutions, where the vendor may feel little imperative to add support for interoperable interchange of user interface designs. An open standard is likely to have a tough time in widening its support beyond a relatively small community of early adopters. The larger the community, the easier it is to gather the resources needed to create and maintain effective, easy to use tools and documentation. This is true for both open source and proprietary solutions.

Some examples of existing concrete user interface languages:

• UIML - early example of a user interface markup language
• MXML - introduced by Macromedia for compilation into Flash SWF
• XUL - introduced by the Mozilla Foundation for the Gecko engine
• XAML - introduced by Microsoft for use with their .NET framework
• OpenLaszlo (LZX) - introduced by Laszlo Systems for their presentation server
• MARIA - developed by ISTI-CNR, combining abstract and concrete UI
• XForms - developed by W3C for rich forms interfaces

2.4.1 WIMP (desktop GUI)

The abbreviation WIMP stands for "windows, icons, menus, pointer", and describes the kind of graphical user interface common on desktop computers running operating systems such as Microsoft Windows, MacOS, and Linux + X Windows. WIMP user interfaces were originally developed by Xerox in the early seventies, but came to popular attention through the Apple Macintosh in the mid-eighties, and later Microsoft Windows. A concrete user interface modelling language for WIMP platforms can build upon a wealth of experience. Some examples of common features include:

• scrollable windows, inline and pop-up dialogues
• click, double click, drag and drop idioms
• window minimization, maximization and close buttons
• icons for minimized applications and as clickable buttons
• tab controls for groups of related panes
• control bars with subsidiary controls
• drop down menus and combo boxes
• keyboard short cuts as alternatives to using the mouse/trackpad
• single and multi-line text boxes
• captioned radio buttons
• captioned check boxes
• up/down spinners
• buttons with text and icons as captions
• named boxes for grouping related controls
• a variety of layout policies, e.g. absolute, horizontal, vertical, grid and table layouts

Graphical editors for creating WIMP user interfaces typically consist of a palette of controls that can be dragged on to a canvas. Once there, each control has a set of associated properties that you can update through a property sheet. These can be used to attach the desired behaviour, and it is common to define this with a scripting language that bridges the user interface controls and the application back-end.

One challenge for WIMP user interfaces is adapting to varying window sizes and resolutions. To some extent, this can be addressed through layout policies that make the best use of the available space. The end user may be able to vary the font size. Scrollable windows make it possible to view a large window in a smaller screen area. However, large changes in window size and resolution call for more drastic adaptations, and one way to address this is via splitting the user interface design into multiple concrete user interface models aimed at different sizes of window.
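As a minimal illustration of this last idea, a run-time could pick one of several concrete UI models based on the current window width. The thresholds and model names below are invented, not taken from Serenoa.

```python
# Hypothetical sketch: select one of several concrete UI models based on
# the current window width. Thresholds and model names are invented.

CUI_MODELS = [
    (480,  "phone-cui"),    # up to 480px wide
    (1024, "tablet-cui"),   # up to 1024px wide
    (None, "desktop-cui"),  # anything larger
]

def select_cui_model(window_width: int) -> str:
    for max_width, model in CUI_MODELS:
        if max_width is None or window_width <= max_width:
            return model

print(select_cui_model(800))   # tablet-cui
```

In practice, the choice might also depend on resolution, input devices, and other context-of-use properties, not just width.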

2.4.2 Touch-based GUI (smart phones and tablets)

In the last few years, there has been a rapid deployment of phones and tablets featuring a high resolution colour screen with a multi-touch sensor. Touch-based devices typically lack traditional keyboards, and have given rise to a new set of user interface design patterns. Some common features include:

• tap, double tap, long tap, drag and drop
• two finger pinch, stretch and zoom
• swipe to pan
• single rather than multiple windows
• background services
• pop-up notifications
• icons for launching applications
• suspend and resume semantics for applications
• orientation sensing and portrait/landscape adaptation
• ambient light level sensing
• proximity sensing
• GPS-based location sensing
• wide variety of display resolutions
• Bluetooth, USB and NFC interfaces
• variations in support for Web standards, especially scripting APIs


Further study is needed to see just how practical it is to define and standardize a common concrete user interface language for different touch-based platforms, such as Apple's iOS and Google's Android. Variations across devices create significant challenges for developers, although some of this can be hidden through the use of libraries.

2.4.3 Vocal UI

Vocal user interfaces are commonly used by automated call centres to provide service that customers can access by phone, using their voice and the phone's key pad. Vocal interfaces have to be designed to cope with errors in speech recognition, and with ungrammatical or out of domain responses by users. Simple vocal interfaces direct the user to respond in narrow and predictable ways that can be characterized by a speech grammar. Errors can be handled via repeating or rephrasing the prompt, or by giving users the choice of using the key pad. Some relevant existing W3C specifications are:

• Voice Extensible Markup Language (VoiceXML)
• Speech Recognition Grammar Specification (SRGS)
• Semantic Interpretation for Speech Recognition (SISR)
• Speech Synthesis Markup Language (SSML)
• Pronunciation Lexicon Specification (PLS)
• Emotion Markup Language (EmotionML)
• Voice Browser Call Control (CCXML)
• State Chart XML (SCXML)

VoiceXML is similar in some respects to the Hypertext Markup Language (HTML) in its use of links and forms. VoiceXML also provides support for spoken dialogues, in terms of error handling, and the use of complementary languages such as SRGS for speech grammars and SSML for control of speech synthesis and prerecorded speech.

The Serenoa framework can be applied to vocal interfaces described in VoiceXML, where the speech grammars can be readily derived. This is the case for applications involving navigation through a tree of menus, where the user is directed to repeat one of the choices given in a prompt, or to tap the key pad with the number of the choice, e.g.

M: Do you want news, sports or weather?
U: weather
M: The weather today will be cold and windy, with a chance of rain.
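A menu-driven exchange like the one above can be sketched as a table mapping the choices allowed by the speech grammar, or their key pad digits, to responses. The grammar and responses below are illustrative only, not part of any Serenoa application.

```python
# Illustrative sketch of a menu-driven vocal dialogue: the recognizer is
# constrained to a small grammar, and each recognized choice (or its key
# pad digit) selects a canned response. All content here is invented.

MENU_PROMPT = "Do you want news, sports or weather?"
CHOICES = {
    "news":    ("1", "Here are today's headlines."),
    "sports":  ("2", "Here are today's sports results."),
    "weather": ("3", "The weather today will be cold and windy."),
}

def handle_utterance(utterance: str) -> str:
    utterance = utterance.strip().lower()
    for word, (digit, response) in CHOICES.items():
        if utterance in (word, digit):        # spoken word or key pad digit
            return response
    # out-of-grammar input: rephrase/repeat the prompt, as described above
    return "Sorry, I didn't catch that. " + MENU_PROMPT

print(handle_utterance("weather"))
```

In VoiceXML, the same behaviour would be expressed declaratively with menu and choice elements, with the reprompt handled by the platform's error events.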


VoiceXML corresponds to the final user interface layer in the Cameleon Reference Framework, and could be complemented by higher level concrete user interface models for vocal interfaces. Further work is needed to clarify the requirements before standardization can take place.

More sophisticated voice interfaces encourage users to answer in an open ended way, where a statistical language model is used to classify the user's utterance, based upon an analysis of large numbers of recorded calls. The classification triggers a state transition network encoding the dialogue model. The following example is from "How may I help you" by Gorin, Parker, Sachs and Wilpon, Proc. of IVITA, October 1996:

M: How may I help you?
U: Can you tell me how much it is to Tokyo?
M: You want to know the cost of a call.
U: Yes, that's right.
M: Please hold for rate information.

This kind of vocal interface is a poor fit for the Serenoa framework, as it requires specialized tools for annotating and analyzing large numbers of calls (the above paper cited the use of a corpus of over 10,000 calls), and for the development of utterance classification hierarchies and state transition dialogue models.

State Chart extensible Markup Language (SCXML)

• http://www.w3.org/TR/scxml/

SCXML provides a means to describe state transition models of behaviour, and can be applied to vocal and multimodal user interfaces.

2.4.4 Multimodal UI

Multimodal user interfaces allow users to provide input with multiple modes, e.g. typing or speaking. A single utterance can involve multiple modes, e.g. saying "tell me more about this one" while tapping at a point on the screen. Likewise, the system can respond with multiple modes of output, e.g. visual, aural, and tactile, using the screen to present something, playing recorded or synthetic speech, and vibrating the device.

The wide range of possible approaches to multimodal user interfaces has hindered the development of standards. Some work that has been considered includes:


• Using spoken requests to play video or music tracks, based upon the Voice Extensible Markup Language (VoiceXML)
• Loosely coupling vocal and graphical user interfaces, where these are respectively described with VoiceXML and HTML, see http://www.w3.org/TR/mmi-arch/
• Extending HTML with JavaScript APIs for vocal input and output, see http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/

The W3C Multimodal Interaction Working Group has worked on:

• The Extensible Multimodal Annotation Markup Language (EMMA), which defines a markup language for containing and annotating the interpretation of user input, e.g. speech and deictic gestures
• Ink Markup Language (InkML), which defines a markup language for capturing traces made by a stylus or finger on a touch sensitive surface. This opens the way to user interfaces where the user writes rather than types or speaks the information to be input.

Human face to face communication is richly multimodal, with facial gestures and body language that complement what is said. Some multimodal interfaces try to replicate this for system output by combining speech with an animated avatar (a talking head). Handwriting and speech also lend themselves to biometric techniques for user authentication, perhaps in combination with face recognition using video input.

Serenoa could address a limited class of multimodal user interfaces, but it is unclear that it is timely to take this to standardization. A possible exception is for automotive applications, where multimodal interaction can be used to mitigate concerns over driver distraction, where drivers need to keep focused on the task of driving safely.

2.4.5 Industrial UI

There is plenty of potential for applying the Serenoa framework to industrial settings. Manufacturing processes frequently involve complex user interfaces for monitoring and control purposes. These can combine mechanically operated valves and sensors, together with sophisticated computer based interactive displays. Model-based user interface design techniques could be applied to reduce the cost of designing and updating industrial user interfaces. This suggests the need for work on concrete user interface modelling languages that reflect the kinds of sensors and actuators needed on the factory floor. The need for specialized models for context awareness of interactive systems in industrial settings is covered in a later section.

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations, and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people are increasingly expecting to access applications on different devices: a high resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concerns, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect that interactive systems can adapt to our preferences and, crucially, to our own limitations: for instance, colour blindness, a need for increased contrast and for big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity, and have difficulty with operating a mouse pointer or touch screen. Bigger controls are needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in a mobile device, and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.
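As a purely hypothetical sketch of what such an exposure might look like, the snippet below models the three aspects above (device, user, environment) as named properties that rule conditions can test. None of these property names come from Serenoa or a W3C specification.

```python
# Hypothetical sketch: a context-of-use model covering device, user and
# environment aspects, exposed as properties that the conditions of
# adaptation rules can test. All property names are invented.

context = {
    "device.screen_width":  320,          # static device capability
    "device.orientation":   "portrait",   # dynamic device capability
    "user.large_fonts":     True,         # user preference/limitation
    "env.ambient_noise_db": 75,           # environment sensing
}

# A rule condition is just a predicate over the context properties.
rules = [
    (lambda c: c["user.large_fonts"] or c["device.screen_width"] < 400,
     "use-large-text"),
    (lambda c: c["env.ambient_noise_db"] > 70,
     "prefer-visual-alerts"),
]

adaptations = [action for cond, action in rules if cond(context)]
print(adaptations)   # ['use-large-text', 'prefer-visual-alerts']
```

A standardized conceptual model would pin down the vocabulary and types of such properties, so that rules written against one run-time would be portable to another.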

2.5.2 Industry Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.

2.5.3 Automotive Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges, in the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car to car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation, and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night, due to the lights of oncoming traffic, or in the day, when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down, or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012, and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive-oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together, these provide the concepts and the means for defining, implementing, and evaluating context aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes:

• What kinds of things are being adapted, e.g. the navigational flow, or the size of text and images?
• Who is triggering and controlling the adaption process, e.g. the end user, the system, or a third party?
• When does the adaptation take place, e.g. design-time or run-time?
• Where does adaptation take place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity?
• Which aspects of the context are involved in the adaptation?
• How is the adaptation performed, i.e. what strategies and tactics are involved?

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate, and compare multiple applications in regards to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.


Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation, and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the Cameleon reference framework

2. To check whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in forward-chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule-weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.
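
To make the forward-chaining cycle concrete, here is a minimal sketch in Python. The rule set, the dict-based model, and the "first matching rule wins" conflict-resolution strategy are all invented for illustration; real rule engines (and AAL-DL) differ in syntax and features:

```python
# Minimal forward-chaining sketch: each rule is a (condition, conclusion)
# pair over a dict-based model. Rules fire until no unfired rule matches,
# with "first matching rule wins" as the conflict-resolution strategy.

def forward_chain(model, rules):
    fired = set()  # avoid refiring a rule that has already updated the model
    while True:
        applicable = [(i, r) for i, r in enumerate(rules)
                      if i not in fired and r[0](model)]
        if not applicable:
            return model
        index, (condition, conclusion) = applicable[0]  # first match wins
        conclusion(model)   # fire: update the model
        fired.add(index)

# Hypothetical design-time rules: small screens get a single-column
# layout, and single-column layouts get a larger font.
rules = [
    (lambda m: m["screen_width"] < 480,
     lambda m: m.update(layout="single-column")),
    (lambda m: m["layout"] == "single-column",
     lambda m: m.update(font_size="large")),
]

model = {"screen_width": 320, "layout": "two-column", "font_size": "normal"}
result = forward_chain(model, rules)
print(result["layout"], result["font_size"])  # single-column large
```

Note how the second rule only becomes applicable after the first one fires, which is exactly the incremental behaviour described above.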

Forward-chaining production rules can be executed efficiently by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later, when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone, and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface and down to other targets at the concrete user interface layer.
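
This propagation behaviour can be sketched as event-condition-action handlers, where an edit event at one layer triggers updates to the other layers. The layer names and the update logic below are illustrative assumptions, not the Serenoa implementation:

```python
# Sketch of event-condition-action (ECA) propagation between an abstract
# UI model and several concrete UI targets. An edit to the abstract layer
# is pushed down to every target; an edit to one target is pushed up to
# the abstract layer and across to the sibling targets.

class ECAPropagator:
    def __init__(self, targets):
        self.abstract = {}                        # abstract UI model
        self.concrete = {t: {} for t in targets}  # one model per target

    def on_abstract_edit(self, prop, value):
        """Event: the designer edits the abstract UI."""
        self.abstract[prop] = value
        for model in self.concrete.values():      # action: propagate down
            model[prop] = value

    def on_concrete_edit(self, target, prop, value):
        """Event: the designer edits one concrete UI target."""
        self.concrete[target][prop] = value
        self.abstract[prop] = value               # action: propagate up
        for name, model in self.concrete.items():
            if name != target:                    # ...and across to siblings
                model[prop] = value

p = ECAPropagator(["desktop", "smartphone", "tablet"])
p.on_abstract_edit("title", "Login")
p.on_concrete_edit("smartphone", "columns", 1)
print(p.concrete["desktop"])  # {'title': 'Login', 'columns': 1}
```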

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement are required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated with lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Framework (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where resource constraints permit this.
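
A possible wiring of the three modules could look like the following sketch. The module interfaces and the sample rule are assumptions made for illustration; Serenoa's actual APIs are defined in its own deliverables:

```python
# Sketch of the run-time architecture: the Context Manager signals
# context changes, the Adaptation Engine evaluates rules against the new
# context, and the Run-time Engine applies the suggested adaptations.

class ContextManager:
    def __init__(self):
        self.context, self.listeners = {}, []

    def subscribe(self, listener):
        self.listeners.append(listener)

    def update(self, key, value):            # signal a context change
        self.context[key] = value
        for listener in self.listeners:
            listener(self.context)

class AdaptationEngine:
    def __init__(self, rules, runtime):
        self.rules, self.runtime = rules, runtime

    def on_context_change(self, context):    # fire matching rules
        for condition, adaptation in self.rules:
            if condition(context):
                self.runtime.apply(adaptation)

class RuntimeEngine:
    def __init__(self):
        self.final_ui = {}

    def apply(self, adaptation):             # map adaptation onto final UI
        self.final_ui.update(adaptation)

# Hypothetical rule: switch to a high-contrast palette in bright light.
rules = [(lambda ctx: ctx.get("ambient_light") == "bright",
          {"palette": "high-contrast"})]

runtime = RuntimeEngine()
engine = AdaptationEngine(rules, runtime)
manager = ContextManager()
manager.subscribe(engine.on_context_change)
manager.update("ambient_light", "bright")
print(runtime.final_ui)  # {'palette': 'high-contrast'}
```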

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL, as currently defined, can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance, and reliability.
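
The layering of first- and second-order rules can be illustrated as follows. This is a plain Python sketch of the idea only, not AAL-DL syntax; the rule sets and context keys are invented:

```python
# Sketch of rule layering: second-order rules inspect the context and
# decide which set of first-order adaptation rules is applied.

# Hypothetical first-order rule sets, keyed by interaction style.
first_order = {
    "vocal": [("output", "speech"), ("confirmations", "explicit")],
    "graphical": [("output", "screen"), ("confirmations", "implicit")],
}

# Second-order rules: predicates over the context that select a rule set.
second_order = [
    (lambda ctx: ctx.get("driving") is True, "vocal"),
    (lambda ctx: True, "graphical"),          # default fallback
]

def adapt(context):
    for predicate, rule_set in second_order:  # pick the first matching set
        if predicate(context):
            return dict(first_order[rule_set])

print(adapt({"driving": True}))
# {'output': 'speech', 'confirmations': 'explicit'}
```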

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block, and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If user is colour-blind, then use alternative colour palette

In XML this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed in the rule language to support this, and to assess its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011 and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items, and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers


• handheld - Intended for handheld devices (typically small screen, limited bandwidth)

• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.

• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.

• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries, and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy, on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, links to talks, and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face was hosted by DFKI in Kaiserslautern, Germany, on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML, or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events, and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications go through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework; see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model-Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization, and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.


The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1(n)
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

The second symbol for enabling denotes task enabling with information passing. Likewise, the second symbol for concurrency denotes concurrent communicating tasks.
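
As an informal illustration (not the CTT XML interchange format), a small task model using the operators above could be encoded as a nested structure; the login tasks here are invented for the example:

```python
# Illustrative encoding of a small CTT task model as nested tuples:
# (operator, left_subtree, right_subtree) for binary temporal operators,
# and ("task", name, allocation) for leaves. ">>" is enabling, "[]" is
# choice, following the operator table above.

login_model = (
    ">>",
    (">>",
     ("task", "EnterUserName", "user input"),
     ("task", "EnterPassword", "user input")),
    ("[]",
     ("task", "Submit", "user input"),
     ("task", "Cancel", "user input")),
)

def leaf_tasks(node):
    """Collect leaf task names in left-to-right order."""
    if node[0] == "task":
        return [node[1]]
    _, left, right = node
    return leaf_tasks(left) + leaf_tasks(right)

print(leaf_tasks(login_model))
# ['EnterUserName', 'EnterPassword', 'Submit', 'Cancel']
```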

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system

• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and the dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities, and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated with attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling, or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item
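
The hierarchy of use objects and the five elementary types could be modelled as in the following sketch. This is purely illustrative (UseML itself is an XML language with its own schema), and the heater example is invented:

```python
# Sketch of a UseML-style use model: use objects form a hierarchy whose
# leaves are elementary use objects of one of the five defined types.

from enum import Enum

class ElementaryType(Enum):
    TRIGGER = "trigger"   # start/call/execute a device function
    SELECT = "select"     # choose one or more items
    INPUT = "input"       # enter an absolute value
    OUTPUT = "output"     # gather information from the UI
    CHANGE = "change"     # make a relative change to a value/item

class UseObject:
    def __init__(self, name, elementary=None, user_groups=()):
        self.name = name
        self.elementary = elementary          # None for structuring nodes
        self.user_groups = list(user_groups)  # example of an annotation
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

# A hypothetical use model for setting a target temperature.
root = UseObject("ControlHeater", user_groups=["operator"])
root.add(UseObject("ReadTemperature", ElementaryType.OUTPUT))
root.add(UseObject("SetTemperature", ElementaryType.INPUT))
root.add(UseObject("StartHeating", ElementaryType.TRIGGER))

print([c.elementary.value for c in root.children])
# ['output', 'input', 'trigger']
```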

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces; see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore, UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS; see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style, and events. The template looks like:

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">

  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>

  <logic></logic>

</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace, and MMI-Arch. For more details, see the link above.
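
The event-based state transitions that SCXML expresses for interactors can be sketched with a minimal state machine. The states and events below are invented for illustration and are not taken from the AIM specification:

```python
# Minimal event-driven state machine, standing in for the SCXML charts
# that describe interactor behaviour. Transitions are a table mapping
# (current_state, event) -> next_state; unknown events leave the state
# unchanged.

class Interactor:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def handle(self, event):
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# Hypothetical button-like interactor.
button = Interactor("idle", {
    ("idle", "focus"): "focused",
    ("focused", "press"): "active",
    ("active", "release"): "focused",
    ("focused", "blur"): "idle",
})

for event in ["focus", "press", "release"]:
    button.handle(event)
print(button.state)  # focused
```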

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment, and equivalence.
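
A mapping of this kind can be sketched as a link from observations to actions via an operator. The class, the "redundance" condition, and the selection example are all illustrative assumptions, not the MIM specification:

```python
# Sketch of a MIM-style multimodal mapping: observations watch state
# charts for changes; when the operator's condition over the observed
# values holds, the mapping fires its actions.

class Mapping:
    def __init__(self, observations, operator, actions):
        self.observations = observations  # callables returning observed states
        self.operator = operator          # combines observations into a bool
        self.actions = actions            # callables fired when it holds

    def evaluate(self):
        if self.operator([obs() for obs in self.observations]):
            for action in self.actions:
                action()
            return True
        return False

# Hypothetical redundance-like condition: both modalities report the
# same selection, so the action fires once.
states = {"voice": "select_item_3", "touch": "select_item_3"}
log = []
mapping = Mapping(
    observations=[lambda: states["voice"], lambda: states["touch"]],
    operator=lambda values: len(set(values)) == 1,
    actions=[lambda: log.append("item_3_selected")],
)
mapping.evaluate()
print(log)  # ['item_3_selected']
```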

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces, and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium; see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models, using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is presented as a class diagram in the submission documents.

UsiXML is accompanied by a plug-in for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is presented to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.
• Relation: a group where two or more elements are related to each other.
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.
• Repeater: used to repeat the content according to data retrieved from a generic data source.

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end (e.g. web services, code libraries, databases, etc.). Each declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionality (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features makes it possible, already at the abstract level, to have a model of the user interface that is not tied to layout details but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
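To illustrate these abstract-level features together, a sketch of a MARIA-style abstract presentation with a bound data model, an external function and an activation event might look as follows. The element and attribute names are loosely modelled on MARIA and should be treated as illustrative, not schema-exact.

```xml
<!-- Hypothetical sketch of a MARIA-style abstract user interface.
     All element and attribute names are illustrative only. -->
<interface name="pizza_order">
  <data_model>
    <element name="size" type="xsd:string"/>
  </data_model>
  <external_functions>
    <!-- signature of a back-end function exploited by the UI -->
    <function name="submitOrder">
      <input name="size" type="xsd:string"/>
      <output type="xsd:boolean"/>
    </function>
  </external_functions>
  <presentation name="order_form">
    <grouping>
      <!-- interactor bound to the data model element "size" -->
      <single_choice data_ref="size">
        <choice value="small"/>
        <choice value="large"/>
      </single_choice>
      <!-- activator raising an activation event that invokes the function -->
      <activator>
        <activation_event function="submitOrder"/>
      </activator>
    </grouping>
  </presentation>
</interface>
```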

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent, but implementation language-independent, details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.
• Mobile CUIs model graphical interfaces for mobile devices.
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.
• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities, through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension of the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute with information on the title, background (colour or image) and font used; and Grouping, which contains the grouping_setting attribute with information on the grouping display technique (grid, fieldset, bullet, background colour or image) and on whether the elements are related by an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
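As an illustration, the abstract SingleChoice and Activator interactors could be refined for the graphical desktop platform roughly as follows. The syntax is a hypothetical sketch built from the attribute names mentioned above (presentation_setting, grouping_setting), not the actual MARIA schema.

```xml
<!-- Hypothetical Graphical Desktop CUI refinement of an abstract
     presentation; element names and structure are illustrative. -->
<presentation>
  <!-- presentation_setting: title, background and font, per the text -->
  <presentation_setting title="Pizza Order" background="#ffffff"
                        font="sans-serif"/>
  <grouping>
    <!-- grouping_setting: one of the display techniques listed above -->
    <grouping_setting technique="fieldset"/>
    <!-- SingleChoice refined as a radio_button group -->
    <radio_button data_ref="size">
      <choice value="small" label="Small"/>
      <choice value="large" label="Large"/>
    </radio_button>
    <!-- Activator refined as a button -->
    <button label="Submit" function="submitOrder"/>
  </grouping>
</presentation>
```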

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  • speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis or the application should ignore the event and continue.

  • pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).

• Changing the synthesis properties (such as volume and gender).

• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (when the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
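Putting these pieces together, a Vocal CUI fragment for a single-choice question with the three event types described above might look like the following. The element and attribute names are a hypothetical sketch, not taken from the MARIA schema.

```xml
<!-- Hypothetical Vocal CUI refinement of a SingleChoice interactor.
     Element and attribute names are illustrative only. -->
<vocal_selection data_ref="size" cardinality="1">
  <!-- question synthesized by the vocal platform -->
  <speech pitch="medium" rate="medium">
    Would you like a small or a large pizza?
  </speech>
  <!-- acceptable user inputs -->
  <grammar>small | large</grammar>
  <!-- the three vocal event types, each with message and re-prompt -->
  <noinput message="Sorry, I did not hear you." re-prompt="true"/>
  <nomatch message="Please say small or large." re-prompt="true"/>
  <help message="Say the size of pizza you want." re-prompt="false"/>
</vocal_selection>
```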

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy to read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that, whilst all of the languages considered support enabling, very few support disabling (deactivation) and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
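The automotive scenario above can be sketched as a small task model. The XML serialization below is a hypothetical illustration of the idea, not the normative schema from the Working Draft.

```xml
<!-- Hypothetical XML serialization of a CTT-style task model in which
     a hazard alert suspends and later resumes the driving-support UI.
     Element and attribute names are illustrative only. -->
<taskModel name="NavigateWithAlerts">
  <task id="useNavigation" category="interaction">
    <!-- suspend-resume: the alert task suspends navigation, which
         resumes once the alert completes -->
    <temporalOperator type="suspend-resume" with="hazardAlert"/>
  </task>
  <task id="hazardAlert" category="system">
    <postcondition>hazard acknowledged or passed</postcondition>
  </task>
</taskModel>
```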


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744 Software Engineering - Metamodel for Development Methodologies is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A Language Supporting Multi-Path Development of User Interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



1 Introduction

This report describes standardization actions for the Serenoa project, and will consider opportunities for standardization, current progress and future plans. Our motivation for work on standardization is to encourage the development and uptake of interoperable tools, at both design and run-time, for context-aware model-based user interfaces.

The following diagram illustrates the Serenoa Architecture; many of the components shown will be considered in later sections of this report from the perspective of their potential for standardization.

For an introduction to the architecture and the benefits for a range of stakeholders, you are invited to read the Serenoa White Paper:

bull Serenoa White Paper (PDF)

2 Potential opportunities for standardization

This section reviews the different areas of work underway in the Serenoa project and provides a brief account of their potential for standardization.

2.1 Task Models

Task models provide a means for describing the set of tasks involved in an interactive system: how the tasks decompose into subtasks, which tasks are to be carried out by the user, the system or both, and the temporal sequence and inter-dependencies of tasks. Task models enable the user interaction to be described and reviewed without being distracted by the details of the user interface. As such, task models are not intended to be a complete description.

The primary task modelling language in Serenoa is ConcurTaskTrees (CTT). This has good prospects for standardization and would enable interoperable exchange of task models between different user interface design tools. See the section on the W3C MBUI Working Group for information on how this is proceeding.

2.2 Domain Models

The general architecture for Serenoa assumes a clean separation between the user interface and the application back-end. The interface is defined through a domain model with named properties and methods. Each property can have an atomic value, such as a boolean, a number or a string. Alternatively, a property can have a structured value with subsidiary properties and methods. Property values, method arguments and return values are described with a type language. The domain model may also include a means for the system to signal events or exceptions, for example an asynchronous change in the context of use or an error in the user's input. A further consideration is whether a method is synchronous or asynchronous, i.e. whether it takes sufficient time to execute to have a noticeable impact on the user experience.

Serenoa has so far avoided defining a separate formal language for domain models, and instead has embedded a limited treatment as part of the abstract user interface language (ASFE-DL). An adequate formalization of domain models will be essential for interoperable interchange of user interface designs. The precise requirements will depend on the kinds of interactive systems being targeted.
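To make the ingredients of a domain model concrete, a sketch might look like the following. Serenoa does not define this syntax; all element and attribute names are hypothetical, chosen to mirror the concepts just described (typed properties, methods with arguments and return values, events, exceptions, and the synchronous/asynchronous distinction).

```xml
<!-- Hypothetical domain model sketch; not an actual Serenoa syntax. -->
<domainModel name="accountService">
  <!-- atomic and structured properties -->
  <property name="balance" type="number" access="read"/>
  <property name="owner" type="object">
    <property name="name" type="string"/>
    <property name="email" type="string"/>
  </property>
  <!-- asynchronous method: long enough to affect the user experience -->
  <method name="transfer" async="true">
    <argument name="amount" type="number"/>
    <argument name="toAccount" type="string"/>
    <returns type="boolean"/>
  </method>
  <!-- signals from the back end to the user interface -->
  <event name="balanceChanged"/>
  <exception name="insufficientFunds"/>
</domainModel>
```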

2.3 Abstract UI Models

In the Serenoa architecture, abstract user interface design models describe interactive systems at a greater level of detail than is commonly the case for task models, but are still independent of the target platforms and modes of interaction. The ASFE-DL language can be loosely described as follows.

At the top level, the abstract user interface can be described in terms of a set of inter-related dialogues. Each dialogue has a set of interactors, which can be thought of as abstract versions of user interface controls. Each interactor is bound to the domain model, as well as to a variety of properties.

There is a lot of potential for standardizing an abstract user interface design language. However, there are many more such languages than is the case for task models. This will make standardization harder, due to the need to forge bridges between different camps through the establishment of common use cases, a shared vocabulary and a synthesis of ideas. As such, ASFE-DL will be just one input into the standardization process.

The list of existing alternatives for AUIs is quite lengthy (Souchon and Vanderdonckt, 2003). Next, we provide more detailed information regarding the two AUI languages that comprise the consortium's portfolio of authored and co-authored languages in this field, namely UsiXML and MARIA.

The USer Interface eXtensible Markup Language (UsiXML) (Limbourg et al., 2005) is an XML-compliant mark-up language to describe user interfaces for multiple contexts and different modalities. UsiXML also allows non-developers to use the language to describe user interfaces, mainly because the elements of the UI can be described at a high level, regardless of the platform of use. The UsiXML language was submitted for a standardisation action plan in the context of the Similar network of excellence and of the Open Interface European project.

MARIA (Model-based language for Interactive Applications) (Paternò et al., 2009) is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments. For designers of multi-device user interfaces, one advantage of using a multi-layer description for specifying UIs is that they do not have to learn all the details of the many possible implementation languages supported by the various devices; instead they can reason in abstract terms, without being tied to a particular UI modality or, even worse, implementation language. In this way they can better focus on the semantics of the interaction, namely what the intended goal of the interaction is, regardless of the details and specificities of the particular environment considered.

2.4 Concrete UI Models

The concrete user interface involves a commitment to a class of device and modes of interaction. Some typical examples are examined in the following subsections. There are quite a few existing user interface languages at this level of abstraction. Some of these are widely deployed proprietary solutions, where the vendor may feel little imperative to add support for interoperable interchange of user interface designs. An open standard is likely to have a tough time in widening its support beyond a relatively small community of early adopters. The larger the community, the easier it is to gather the resources needed to create and maintain effective, easy to use tools and documentation. This is true for both open source and proprietary solutions.

Some examples of existing concrete user interface languages:

• UIML - an early example of a user interface markup language
• MXML - introduced by Macromedia for compilation into Flash SWF
• XUL - introduced by the Mozilla Foundation for the Gecko engine
• XAML - introduced by Microsoft for use with their .NET framework
• OpenLaszlo (LZX) - introduced by Laszlo Systems for their presentation server
• MARIA - developed by ISTI-CNR and combining abstract and concrete UI
• XForms - developed by W3C for rich forms interfaces

2.4.1 WIMP (desktop GUI)

The abbreviation WIMP stands for "windows, icons, menus, pointer" and describes the kind of graphical user interface common on desktop computers running operating systems such as Microsoft Windows, MacOS and Linux + X Windows. WIMP user interfaces were originally developed by Xerox in the early seventies, but came to popular attention through the Apple Macintosh in the mid-eighties, and later Microsoft Windows. A concrete user interface modelling language for WIMP platforms can build upon a wealth of experience. Some examples of common features include:

• scrollable windows, inline and pop-up dialogues
• click, double click, drag and drop idioms
• window minimization, maximization and close buttons
• icons for minimized applications and as clickable buttons
• tab controls for groups of related panes
• control bars with subsidiary controls
• drop down menus and combo boxes
• keyboard shortcuts as alternatives to using the mouse/trackpad
• single and multi-line text boxes
• captioned radio buttons
• captioned check boxes
• up/down spinners
• buttons with text and icons as captions
• named boxes for grouping related controls
• a variety of layout policies, e.g. absolute, horizontal, vertical, grid and table layouts

Graphical editors for creating WIMP user interfaces typically consist of a palette of controls that can be dragged on to a canvas. Once there, each control has a set of associated properties that you can update through a property sheet. These can be used to attach the desired behaviour, and it is common to define this with a scripting language that bridges the user interface controls and the application back-end.

One challenge for WIMP user interfaces is adapting to varying window sizes and resolutions. To some extent this can be addressed through layout policies that make the best use of the available space. The end user may be able to vary the font size. Scrollable windows make it possible to view a large window in a smaller screen area. However, large changes in window size and resolution call for more drastic adaptations, and one way to address this is by splitting the user interface design into multiple concrete user interface models aimed at different sizes of window.

2.4.2 Touch-based GUI (smart phones and tablets)

In the last few years there has been a rapid deployment of phones and tablets featuring a high resolution colour screen with a multi-touch sensor. Touch-based devices typically lack traditional keyboards, and have given rise to a new set of user interface design patterns. Some common features include:

• tap, double tap, long tap, drag and drop
• two finger pinch, stretch and zoom
• swipe to pan
• single rather than multiple windows
• background services
• pop-up notifications
• icons for launching applications
• suspend and resume semantics for applications
• orientation sensing and portrait/landscape adaptation
• ambient light level sensing
• proximity sensing
• GPS-based location sensing
• wide variety of display resolutions
• Bluetooth, USB and NFC interfaces
• variations in support for Web standards, especially scripting APIs

Further study is needed to see just how practical it is to define and standardize a common concrete user interface language for different touch-based platforms, such as Apple's iOS and Google's Android. Variations across devices create significant challenges for developers, although some of this can be hidden through the use of libraries.

2.4.3 Vocal UI

Vocal user interfaces are commonly used by automated call centres to provide services that customers can access by phone, using their voice and the phone's key pad. Vocal interfaces have to be designed to cope with errors in speech recognition, and with ungrammatical or out of domain responses by users. Simple vocal interfaces direct the user to respond in narrow and predictable ways that can be characterized by a speech grammar. Errors can be handled via repeating or rephrasing the prompt, or by giving users the choice of using the key pad. Some relevant existing W3C specifications are:

• Voice Extensible Markup Language (VoiceXML)
• Speech Recognition Grammar Specification (SRGS)
• Semantic Interpretation for Speech Recognition (SISR)
• Speech Synthesis Markup Language (SSML)
• Pronunciation Lexicon Specification (PLS)
• Emotion Markup Language (EmotionML)
• Voice Browser Call Control (CCXML)
• State Chart XML (SCXML)

VoiceXML is similar in some respects to the Hypertext Markup Language (HTML) in its use of links and forms. VoiceXML also provides support for spoken dialogues, in terms of error handling and the use of complementary languages such as SRGS for speech grammars and SSML for control of speech synthesis and prerecorded speech.

The Serenoa framework can be applied to vocal interfaces described in VoiceXML where the speech grammars can be readily derived. This is the case for applications involving navigation through a tree of menus, where the user is directed to repeat one of the choices given in a prompt, or to tap the key pad with the number of the choice, e.g.

M: Do you want news, sports or weather?
U: Weather.
M: The weather today will be cold and windy, with a chance of rain.
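Menu-tree navigation of this kind can be sketched in a few lines of code. The following is a hypothetical illustration, not Serenoa code: the menu contents, the keyword matching, and the key-pad fallback are all invented for the example.

```python
# Hypothetical sketch of a menu-tree vocal dialogue: each node maps
# recognized keywords (or key-pad digits, 1-based) to a response.
menu = {
    "prompt": "Do you want news, sports or weather?",
    "choices": {
        "news": "Here are today's headlines.",
        "sports": "Here are today's sports results.",
        "weather": "The weather today will be cold and windy, with a chance of rain.",
    },
}

def respond(menu, utterance):
    """Match a recognized utterance or a key-pad digit against the menu."""
    keys = list(menu["choices"])
    if utterance.isdigit():                      # key-pad fallback
        index = int(utterance) - 1
        if 0 <= index < len(keys):
            return menu["choices"][keys[index]]
    reply = menu["choices"].get(utterance.lower())
    return reply if reply else "Sorry, please repeat your choice."

print(respond(menu, "weather"))
print(respond(menu, "3"))
```

The error-handling branch at the end mirrors the "repeat or rephrase the prompt" strategy described above.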


VoiceXML corresponds to the final user interface layer in the Cameleon Reference Framework, and could be complemented by higher level concrete user interface models for vocal interfaces. Further work is needed to clarify the requirements before standardization can take place.

More sophisticated voice interfaces encourage users to answer in an open ended way, where a statistical language model is used to classify the user's utterance based upon an analysis of large numbers of recorded calls. The classification triggers a state transition network encoding the dialogue model. The following example is from "How may I help you?" by Gorin, Parker, Sachs and Wilpon (Proc. of IVITA, October 1996):

M: How may I help you?
U: Can you tell me how much it is to Tokyo?
M: You want to know the cost of a call?
U: Yes, that's right.
M: Please hold for rate information.

This kind of vocal interface is a poor fit for the Serenoa framework, as it requires specialized tools for annotating and analyzing large numbers of calls (the above paper cited the use of a corpus of over 10,000 calls), and for the development of utterance classification hierarchies and state transition dialogue models.

State Chart XML (SCXML)

• http://www.w3.org/TR/scxml/

SCXML provides a means to describe state transition models of behaviour, and can be applied to vocal and multimodal user interfaces.
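The kind of state transition model that SCXML captures in markup can be illustrated with a small sketch. The states and events below are invented for the example and are not taken from the SCXML specification:

```python
# Hypothetical state transition network, in the style SCXML describes:
# (state, event) pairs map to target states for a simple vocal dialogue.
transitions = {
    ("prompt", "recognized"): "confirm",
    ("prompt", "no-match"): "prompt",       # re-prompt on recognition error
    ("confirm", "yes"): "done",
    ("confirm", "no"): "prompt",
}

def run(events, state="prompt"):
    """Feed a sequence of events through the transition table."""
    for event in events:
        state = transitions.get((state, event), state)
    return state

print(run(["no-match", "recognized", "yes"]))   # -> done
```

An SCXML document expresses the same table as nested state and transition elements, with events driving the transitions.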

2.4.4 Multimodal UI

Multimodal user interfaces allow users to provide input with multiple modes, e.g. typing or speaking. A single utterance can involve multiple modes, e.g. saying "tell me more about this one" while tapping at a point on the screen. Likewise, the system can respond with multiple modes of output, e.g. visual, aural and tactile: using the screen to present something, playing recorded or synthetic speech, and vibrating the device.

The wide range of possible approaches to multimodal user interfaces has hindered the development of standards. Some work that has been considered includes:


• Using spoken requests to play video or music tracks, based upon the Voice Extensible Markup Language (VoiceXML)
• Loosely coupling vocal and graphical user interfaces, where these are respectively described with VoiceXML and HTML, see http://www.w3.org/TR/mmi-arch/
• Extending HTML with JavaScript APIs for vocal input and output, see http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/

The W3C Multimodal Interaction Working Group has worked on:

• The Extensible Multimodal Annotation Markup Language (EMMA), which defines a markup language for containing and annotating the interpretation of user input, e.g. speech and deictic gestures
• The Ink Markup Language (InkML), which defines a markup language for capturing traces made by a stylus or finger on a touch sensitive surface. This opens the way to user interfaces where the user writes, rather than types or speaks, the information to be input.

Human face to face communication is richly multimodal, with facial gestures and body language that complement what is said. Some multimodal interfaces try to replicate this for system output by combining speech with an animated avatar (a "talking head"). Handwriting and speech also lend themselves to biometric techniques for user authentication, perhaps in combination with face recognition using video input.

Serenoa could address a limited class of multimodal user interfaces, but it is unclear whether it is timely to take this to standardization. A possible exception is automotive applications, where multimodal interaction can be used to mitigate concerns over driver distraction, as drivers need to keep focused on the task of driving safely.

2.4.5 Industrial UI

There is plenty of potential for applying the Serenoa framework to industrial settings. Manufacturing processes frequently involve complex user interfaces for monitoring and control purposes. These can combine mechanically operated valves and sensors together with sophisticated computer based interactive displays. Model-based user interface design techniques could be applied to reduce the cost of designing and updating industrial user interfaces. This suggests the need for work on concrete user interface modelling languages that reflect the kinds of sensors and actuators needed on the factory floor. The need for specialized models for context awareness of interactive systems in industrial settings is covered in a later section.

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations, and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people are increasingly expecting to access applications on different devices: a high resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concerns, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect interactive systems to adapt to our preferences and, crucially, to our own limitations: for instance, colour blindness, a need for increased contrast and for big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity, and have difficulty with operating a mouse pointer or touch screen. Bigger controls are needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in a mobile device, and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.
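As a rough illustration of that idea, the sketch below models the context of use as a flat set of properties that rule conditions can test. The property names are hypothetical, not a proposed standard vocabulary:

```python
# Hypothetical context-of-use model exposed as properties that
# adaptation rule conditions can query (names are illustrative only).
context = {
    "device.screen.orientation": "landscape",   # dynamic device capability
    "device.screen.width_px": 1280,             # static device capability
    "user.vision.high_contrast": True,          # user capability/preference
    "environment.noise_level": "high",          # environment
}

def condition_holds(condition, context):
    """Evaluate a simple (property, operator, value) rule condition."""
    prop, op, value = condition
    actual = context.get(prop)
    return {"==": actual == value,
            ">=": actual is not None and actual >= value}[op]

# e.g. a rule condition deciding whether to use a high-contrast palette
print(condition_holds(("user.vision.high_contrast", "==", True), context))
```

A standardized conceptual model would pin down which properties exist, their types, and how they are surfaced through APIs and rule languages.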

2.5.2 Industry: Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.

2.5.3 Automotive: Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges: the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car to car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation, and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night due to the lights of oncoming traffic, or in the day when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down, or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012, and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing, and evaluating context aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes:

• What kinds of things are being adapted, e.g. the navigational flow or the size of text and images?
• Who is triggering and controlling the adaptation process, e.g. the end user, the system, or a third party?
• When does the adaptation take place, e.g. design-time or run-time?
• Where does adaptation take place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity?
• Which aspects of the context are involved in the adaptation?
• How is the adaptation performed, i.e. what strategies and tactics are involved?

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate, and compare multiple applications with regard to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.


Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation, and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the Cameleon reference framework
2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in a forward chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.
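A minimal sketch of forward chaining with first-match conflict resolution might look as follows. The rules and model are invented for the example, and a production-quality engine would use an algorithm such as RETE rather than this naive loop:

```python
# Naive forward-chaining sketch (illustrative, not a Serenoa component):
# each rule is a (condition, action) pair; rules fire, first match wins,
# until no condition matches the current state of the model.
def forward_chain(model, rules, max_cycles=100):
    for _ in range(max_cycles):
        for condition, action in rules:          # first matching rule wins
            if condition(model):
                action(model)
                break
        else:
            return model                         # no rule fired: done
    raise RuntimeError("rule set did not terminate")

# Example: propagate a (hypothetical) minimum font size into a UI model.
model = {"font_size": 9, "checked": False}
rules = [
    (lambda m: m["font_size"] < 12, lambda m: m.update(font_size=12)),
    (lambda m: not m["checked"],    lambda m: m.update(checked=True)),
]
print(forward_chain(model, rules))   # -> {'font_size': 12, 'checked': True}
```

Note that each action must eventually falsify its own condition, or the rule set fails to terminate; the `max_cycles` guard makes that failure explicit.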

Forward chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later, when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface, and down to other targets at the concrete user interface layer.
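The propagation idea can be sketched with the event-condition-action pattern, assuming a hypothetical event format and target set; this illustrates the pattern itself, not the Serenoa rule language:

```python
# Hypothetical event-condition-action (ECA) sketch: a change event on the
# abstract UI layer is propagated to every concrete UI target.
targets = {"desktop": {}, "smartphone": {}, "tablet": {}}

eca_rules = [
    # (event type, condition on the event, action)
    ("property-changed",
     lambda e: e["layer"] == "abstract",
     lambda e: [t.update({e["property"]: e["value"]})
                for t in targets.values()]),
]

def on_event(event):
    """Dispatch an event through the ECA rule set."""
    for etype, condition, action in eca_rules:
        if etype == event["type"] and condition(event):
            action(event)

on_event({"type": "property-changed", "layer": "abstract",
          "property": "label", "value": "Submit"})
print(targets["smartphone"])   # -> {'label': 'Submit'}
```

Propagation in the other direction (concrete up to abstract, then down to the other targets) would add further rules keyed on the concrete layers.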

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement are required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high level rules avoid dealing with details that are better treated with lower level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use, and for signalling changes.

The Adaptation Engine executes the AAL-DL rules, as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
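A toy sketch of how the three modules might be wired together is shown below. The class interfaces and the colour-blindness rule are assumptions for illustration, not the actual Serenoa APIs:

```python
# Illustrative wiring of the three run-time modules (not Serenoa code).
class ContextManager:
    """Tracks the context of use and signals changes to listeners."""
    def __init__(self):
        self.context, self.listeners = {}, []
    def update(self, key, value):
        self.context[key] = value
        for listener in self.listeners:
            listener(self.context)

class AdaptationEngine:
    """Evaluates hypothetical adaptation rules against the context."""
    def __init__(self, rules, runtime):
        self.rules, self.runtime = rules, runtime
    def __call__(self, context):                 # invoked on context changes
        for condition, adaptation in self.rules:
            if condition(context):
                self.runtime.apply(adaptation)

class RuntimeEngine:
    """Maps the concrete UI to the final UI, applying adaptations."""
    def __init__(self):
        self.final_ui = {"palette": "default"}
    def apply(self, adaptation):
        self.final_ui.update(adaptation)

runtime = RuntimeEngine()
rules = [(lambda c: c.get("user.colour_blind"), {"palette": "alternative"})]
cm = ContextManager()
cm.listeners.append(AdaptationEngine(rules, runtime))
cm.update("user.colour_blind", True)             # context change triggers rule
print(runtime.final_ui)   # -> {'palette': 'alternative'}
```

The listener registration stands in for the Context Manager's change-signalling role; in a cloud deployment the same flow would cross a network boundary.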

One challenge is preserving the state of the interaction when applying an adaptation in response to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page's scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high level adaptation rules expressed in AAL-DL into the final user interface.


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL, as currently defined, can be used for first order adaptation rules for a specific context of use, and second order rules that select which first order rules to apply. Further work is under consideration for third order rules that act on second order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high level syntax aimed at direct editing.

Here is an example of a rule:

• If user is colour-blind, then use an alternative colour palette

In XML, this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and to assess its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items, and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, at a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers
• handheld - Intended for handheld devices (typically small screen, limited bandwidth)
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities, and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries, and client side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group.

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face to face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011, and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaption to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)


But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications follow the following stages. These have been annotated with the dates the MBUI deliverables were envisioned by the charter to reach each stage:

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization, and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR, and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.


The temporal operators are as follows:

  Operator            Symbol
  Enabling            T1 >> T2  or  T1 []>> T2
  Disabling           T1 [> T2
  Interruption        T1 |> T2
  Choice              T1 [] T2
  Iteration           T1*  or  T1(n)
  Concurrency         T1 ||| T2  or  T1 |[]| T2
  Optionality         [T]
  Order Independency  T1 |=| T2

Where the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.
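For instance, the operators can be combined in the textual notation; the task names in the following one-line sketch are invented purely for illustration:

```
Login = (EnterName ||| EnterPassword) >> Submit [> Abort
```

Read as: name and password can be entered concurrently, completing both enables Submit, and Abort can disable the whole task at any point.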

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.
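To suggest what such an interchange file might contain, here is a minimal sketch of a hierarchical task with allocation and a temporal operator. The element and attribute names are illustrative assumptions, not taken from the actual CTT schema:

```xml
<taskModel name="MakeBooking">
  <!-- parent abstraction task; children are related by enabling (>>) -->
  <task name="MakeBooking" category="abstraction" operator="enabling">
    <task name="SelectDates" category="interaction"/>
    <task name="ShowAvailability" category="application"/>
  </task>
</taskModel>
```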

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item
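A use model along these lines might be serialized as in the following sketch. The element and attribute names are illustrative assumptions rather than the normative useML schema, and the device scenario is invented:

```xml
<useModel name="PumpControl">
  <!-- a use object annotated with an eligible user group -->
  <useObject name="ConfigurePump" userGroups="technician">
    <select name="ChooseMode"/>     <!-- pick one of several modes -->
    <input name="SetFlowRate"/>     <!-- enter an absolute value -->
    <trigger name="StartPump"/>     <!-- execute a device function -->
    <output name="ShowStatus"/>     <!-- read information back -->
  </useObject>
</useModel>
```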

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance independent presentation concepts with appliance dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">

  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>

  <logic></logic>

</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application dependent data. The style element binds UI components to their implementation, e.g. java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application dependent, but appliance independent, events and then bind them to appliance dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
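As a concrete illustration of the style binding just described, the template might be filled in as follows. This is a minimal sketch loosely following UIML conventions; the part names and property values are invented rather than taken from a real application:

```xml
<uiml version="2.0">
  <interface name="Hello" class="MyApps">
    <structure>
      <part name="greeting" class="Label"/>
    </structure>
    <style>
      <!-- bind the abstract Label part to a Java AWT widget -->
      <property part-name="greeting" name="rendering">java.awt.Label</property>
      <property part-name="greeting" name="text">Hello, world</property>
    </style>
  </interface>
</uiml>
```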

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping
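To give a flavour of the SCXML notation mentioned above, interactor behaviour such as a button that can be pressed, released and disabled could be expressed along the following lines. The state and event names are invented for illustration, not taken from the AIM submission:

```xml
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="enabled">
  <state id="enabled">
    <transition event="press" target="pressed"/>
    <transition event="disable" target="disabled"/>
  </state>
  <state id="pressed">
    <transition event="release" target="enabled"/>
  </state>
  <state id="disabled">
    <transition event="enable" target="enabled"/>
  </state>
</scxml>
```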

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, Redis TupleSpace and MMI-Arch. For more details see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on Abstract Interactor Model (AIM) Specifications.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces, and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, and defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework, where task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied with a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: Allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: Allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: Allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: Represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor-compositions are:

• Grouping: a generic group of interactor elements
• Relation: a group where two or more elements are related to each other
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements
• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: The interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime, modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlation between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases etc.). One declaration contains the signature of the external function that specifies its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, that specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, that can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated, invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
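To give a flavour of the abstract level, the following sketch shows a presentation with a selection interactor bound to a data element and an activator raising an activation event. The element and attribute names are illustrative approximations of MARIA XML, not exact schema terms:

```xml
<presentation name="SelectDestination">
  <grouping>
    <!-- a Selection interactor (Single Choice) bound to a data element -->
    <single_choice id="destination" data_reference="trip/city">
      <choice_element value="Rome"/>
      <choice_element value="Pisa"/>
    </single_choice>
    <!-- an Activator whose activation event invokes an external function -->
    <activator id="confirm">
      <activation_event external_function="bookTrip"/>
    </activator>
  </grouping>
</presentation>
```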

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers
• Mobile CUIs model graphical interfaces for mobile devices
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, with information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute, with information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.
• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.
• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user, and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)

• Changing the synthesis properties (such as volume and gender)

• Inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input), and help (when the user asks for support, in any platform specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environment", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend / resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration: it is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.
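The suspend / resume behaviour described above can be sketched as follows; the task names ("navigation", "hazard-alert") are illustrative, and this is a sketch of the operator's intent rather than the normative CTT semantics:

```python
# A safety-critical task suspends the currently active task; once the hazard
# has passed, the original task is resumed from the suspension stack.
class TaskRunner:
    def __init__(self, task):
        self.active = task
        self.suspended = []   # stack of suspended tasks

    def suspend_for(self, safety_task):
        self.suspended.append(self.active)
        self.active = safety_task

    def resume(self):
        self.active = self.suspended.pop()

ui = TaskRunner("navigation")
ui.suspend_for("hazard-alert")   # e.g. an alert of an upcoming hazard
assert ui.active == "hazard-alert"
ui.resume()                      # hazard passed: the original UI comes back
print(ui.active)
```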


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design-time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



subtasks, which tasks are to be carried out by the user, the system, or both, and the temporal sequence and inter-dependencies of tasks. Task models enable the user interaction to be described and reviewed without being distracted by the details of the user interface. As such, task models are not intended to be a complete description.

The primary task modeling language in Serenoa is ConcurTaskTrees (CTT). This has good prospects for standardization, and would enable interoperable exchange of task models between different user interface design tools. See the section on the W3C MBUI Working Group for information on how this is proceeding.
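The ingredients of a task model described above (subtasks, allocation to user or system, and a temporal relation among siblings) can be sketched as a small tree; the task names and the "enabling" operator label are illustrative, not normative CTT syntax:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    name: str
    performer: str                   # "user", "system" or "both"
    operator: Optional[str] = None   # temporal relation among the children
    children: List["Task"] = field(default_factory=list)

# A hypothetical flight-booking task: each subtask enables the next.
book_flight = Task("BookFlight", "both", operator="enabling", children=[
    Task("EnterDates", "user"),
    Task("SearchFlights", "system"),
    Task("SelectFlight", "user"),
])

def user_tasks(task):
    """Collect every task carried out (at least in part) by the user."""
    found = [task.name] if task.performer in ("user", "both") else []
    for child in task.children:
        found += user_tasks(child)
    return found

print(user_tasks(book_flight))
```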

2.2 Domain Models

The general architecture for Serenoa assumes a clean separation between the user interface and the application back-end. The interface is defined through a domain model with named properties and methods. Each property can have an atomic value, such as a boolean, a number or a string. Alternatively, a property can have a structured value with subsidiary properties and methods. Property values, method arguments and return values are described with a type language. The domain model may also include a means for the system to signal events or exceptions, for example an asynchronous change in the context of use, or an error in the user's input. A further consideration is whether a method is synchronous or asynchronous, i.e. whether it takes sufficient time to execute to have a noticeable impact on the user experience.
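The domain-model concepts above can be sketched as follows; all of the names (the account model, its properties and its transfer method) are illustrative assumptions, not part of any Serenoa language:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Property:
    name: str
    type: str        # e.g. "boolean", "number", "string", or a structured type

@dataclass
class Method:
    name: str
    arg_types: List[str]
    return_type: str
    asynchronous: bool = False   # long enough to affect the user experience?

@dataclass
class DomainModel:
    properties: List[Property] = field(default_factory=list)
    methods: List[Method] = field(default_factory=list)
    events: List[str] = field(default_factory=list)   # signalled exceptions etc.

account = DomainModel(
    properties=[Property("balance", "number"), Property("owner", "string")],
    methods=[Method("transfer", ["string", "number"], "boolean", asynchronous=True)],
    events=["insufficient-funds"],
)
print([p.name for p in account.properties])
```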

Serenoa has so far avoided defining a separate formal language for domain models, and instead has embedded a limited treatment as part of the abstract user interface (ASFE-DL). An adequate formalization of domain models will be essential for interoperable interchange of user interface designs. The precise requirements will depend on the kinds of interactive systems that are being targeted.

2.3 Abstract UI Models

In the Serenoa architecture, abstract user interface design models describe interactive systems at a greater level of detail than is commonly the case for task models, but are still independent of the target platforms and modes of interaction. The ASFE-DL language can be loosely described as follows.

At the top level, the abstract user interface can be described in terms of a set of inter-related dialogues. Each dialogue has a set of interactors, which can be thought of as abstract versions of user interface controls. Each interactor is bound to the domain model, as well as to a variety of properties.
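The structure loosely described above (inter-related dialogues containing interactors bound to the domain model) can be sketched as follows; the dialogue names and binding paths are illustrative assumptions, not ASFE-DL syntax:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Interactor:
    name: str
    binding: str     # path into the domain model, e.g. "account.balance"

@dataclass
class Dialogue:
    name: str
    interactors: List[Interactor] = field(default_factory=list)
    next_dialogues: List[str] = field(default_factory=list)  # inter-relations

# A hypothetical two-dialogue abstract UI for a banking application.
aui = [
    Dialogue("Login", [Interactor("username", "user.name")], ["Overview"]),
    Dialogue("Overview", [Interactor("balance", "account.balance")]),
]
print([d.name for d in aui])
```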

There is a lot of potential for standardizing an abstract user interface design language. However, there are many more such languages than is the case for task models. This will make it harder to standardize, due to the need to forge bridges between different camps through the establishment of common use cases, a shared vocabulary, and a synthesis of ideas. As such, ASFE-DL will be just one input into the standardization process.

The list of existing alternatives for AUIs is quite lengthy (Souchon and Vanderdonckt, 2003). Next we will provide more detailed information regarding the two AUI languages that comprise the consortium's portfolio of authored and co-authored languages in this field, namely UsiXML and MARIA.

The USer Interface EXtensible Markup Language (UsiXML) (Limbourg et al., 2005) is an XML-compliant mark-up language to describe user interfaces for multiple contexts and different modalities. UsiXML also allows non-developers to use the language to describe user interfaces, mainly because the elements of the UI can be described at a high level, regardless of the platform of use. The UsiXML language was submitted for a standardisation action plan in the context of the Similar network of excellence and of the Open Interface European project.

MARIA (Model-based language for Interactive Applications) (Paternò et al., 2009) is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments. For designers of multi-device user interfaces, one advantage of using a multi-layer description for specifying UIs is that they do not have to learn all the details of the many possible implementation languages supported by the various devices; they can reason in abstract terms, without being tied to a particular UI modality or, even worse, implementation language. In this way they can better focus on the semantics of the interaction, namely what the intended goal of the interaction is, regardless of the details and specificities of the particular environment considered.

2.4 Concrete UI Models

The concrete user interface involves a commitment to a class of device and modes of interaction. Some typical examples are examined in the following subsections. There are quite a few existing user interface languages at this level of abstraction. Some of these are widely deployed proprietary solutions, where the vendor may feel little imperative to add support for interoperable interchange of user interface designs. An open standard is likely to have a tough time in widening its support beyond a relatively small community of early adopters. The larger the community, the easier it is to gather the resources needed to create and maintain effective, easy to use tools and documentation. This is true for both open source and proprietary solutions.

Some examples of existing concrete user interface languages:

• UIML - early example of a user interface markup language
• MXML - introduced by Macromedia for compilation into Flash SWF
• XUL - introduced by Mozilla Foundation for the Gecko engine
• XAML - introduced by Microsoft for use with their .NET framework
• OpenLaszlo (LZX) - introduced by Laszlo Systems for their presentation server
• MARIA - developed by ISTI-CNR and combining abstract and concrete UI
• XForms - developed by W3C for rich forms interfaces

2.4.1 WIMP (desktop GUI)

The abbreviation WIMP stands for "windows, icons, menus, pointer" and describes the kind of graphical user interface common on desktop computers running operating systems such as Microsoft Windows, MacOS and Linux + X Windows. WIMP user interfaces were originally developed by Xerox in the early seventies, but came to popular attention through the Apple Macintosh in the mid-eighties, and later Microsoft Windows. A concrete user interface modelling language for WIMP platforms can build upon a wealth of experience. Some examples of common features include:

• scroll-able windows, inline and pop-up dialogues
• click, double click, drag and drop idioms
• window minimization, maximization and close buttons
• icons for minimized applications and as clickable buttons
• tab controls for groups of related panes
• control bars with subsidiary controls
• drop down menus and combo boxes
• keyboard short cuts as alternatives to using the mouse / trackpad
• single and multi-line text boxes
• captioned radio buttons
• captioned check boxes
• up/down spinners
• buttons with text and icons as captions
• named boxes for grouping related controls
• a variety of layout policies, e.g. absolute, horizontal, vertical, grid and table layouts

Graphical editors for creating WIMP user interfaces typically consist of a palette of controls that can be dragged onto a canvas. Once there, each control has a set of associated properties that you can update through a property sheet. These can be used to attach the desired behaviour, and it is common to define this with a scripting language that bridges the user interface controls and the application back-end.

One challenge for WIMP user interfaces is adapting to varying window sizes and resolutions. To some extent this can be addressed through layout policies that make the best use of the available space. The end user may be able to vary the font size. Scrollable windows make it possible to view a large window in a smaller screen area. However, large changes in window size and resolution call for more drastic adaptations, and one way to address this is via splitting the user interface design into multiple concrete user interface models aimed at different sizes of window.
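The last idea above can be sketched as a simple selection among concrete UI models by window width; the breakpoint values and model names are illustrative assumptions:

```python
# Ordered (max_width, model) pairs; None means "anything larger".
CUI_MODELS = [
    (480,  "compact-cui"),     # small windows: single column, larger controls
    (1024, "standard-cui"),    # mid-size windows
    (None, "widescreen-cui"),  # large windows: multi-pane layout
]

def select_cui(window_width):
    """Pick the concrete UI model aimed at the current window size."""
    for max_width, model in CUI_MODELS:
        if max_width is None or window_width <= max_width:
            return model

print(select_cui(800))
```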

2.4.2 Touch-based GUI (smart phones and tablets)

In the last few years there has been a rapid deployment of phones and tablets featuring a high resolution colour screen with a multi-touch sensor. Touch-based devices typically lack traditional keyboards, and have given rise to a new set of user interface design patterns. Some common features include:

• tap, double tap, long tap, drag and drop
• two finger pinch, stretch and zoom
• swipe to pan
• single rather than multiple windows
• background services
• pop-up notifications
• icons for launching applications
• suspend and resume semantics for applications
• orientation sensing and portrait/landscape adaptation
• ambient light level sensing
• proximity sensing
• GPS-based location sensing
• wide variety of display resolutions
• Bluetooth, USB and NFC interfaces
• variations in support for Web standards, especially scripting APIs


Further study is needed to see just how practical it is to define and standardize a common concrete user interface language for different touch-based platforms, such as Apple's iOS and Google's Android. Variations across devices create significant challenges for developers, although some of this can be hidden through the use of libraries.

2.4.3 Vocal UI

Vocal user interfaces are commonly used by automated call centres to provide services that customers can access by phone, using their voice and the phone's key pad. Vocal interfaces have to be designed to cope with errors in speech recognition, and with ungrammatical or out-of-domain responses by users. Simple vocal interfaces direct the user to respond in narrow and predictable ways that can be characterized by a speech grammar. Errors can be handled via repeating or rephrasing the prompt, or by giving users the choice of using the key pad. Some relevant existing W3C specifications are:

• Voice Extensible Markup Language (VoiceXML)
• Speech Recognition Grammar Specification (SRGS)
• Semantic Interpretation for Speech Recognition (SISR)
• Speech Synthesis Markup Language (SSML)
• Pronunciation Lexicon Specification (PLS)
• Emotion Markup Language (EmotionML)
• Voice Browser Call Control (CCXML)
• State Chart XML (SCXML)

VoiceXML is similar in some respects to the Hypertext Markup Language (HTML) in its use of links and forms. VoiceXML also provides support for spoken dialogues in terms of error handling, and the use of complementary languages such as SRGS for speech grammars and SSML for control of speech synthesis and prerecorded speech.

The Serenoa framework can be applied to vocal interfaces described in VoiceXML where the speech grammars can be readily derived. This is the case for applications involving navigation through a tree of menus, where the user is directed to repeat one of the choices given in a prompt, or to tap the key pad with the number of the choice, e.g.

M: Do you want news, sports or weather?
U: Weather.
M: The weather today will be cold and windy, with a chance of rain.
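A menu of this kind can be written as a small VoiceXML-style fragment; the sketch below is illustrative (a real document would also declare the vxml root, DTMF alternatives, and event handlers):

```python
import xml.etree.ElementTree as ET

# A minimal menu corresponding to the dialogue above: the user repeats one
# of the choices given in the prompt.
vxml = """
<menu>
  <prompt>Do you want news, sports or weather?</prompt>
  <choice next="#news">news</choice>
  <choice next="#sports">sports</choice>
  <choice next="#weather">weather</choice>
</menu>
"""

menu = ET.fromstring(vxml)
choices = {c.text: c.attrib["next"] for c in menu.findall("choice")}
print(choices["weather"])
```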


VoiceXML corresponds to the final user interface layer in the Cameleon Reference Framework, and could be complemented by higher-level concrete user interface models for vocal interfaces. Further work is needed to clarify the requirements before standardization can take place.

More sophisticated voice interfaces encourage users to answer in an open-ended way, where a statistical language model is used to classify the user's utterance, based upon an analysis of large numbers of recorded calls. The classification triggers a state transition network encoding the dialogue model. The following example is from "How may I help you?" by Gorin, Parker, Sachs and Wilpon, Proc. of IVTTA, October 1996:

M: How may I help you?
U: Can you tell me how much it is to Tokyo?
M: You want to know the cost of a call.
U: Yes, that's right.
M: Please hold for rate information.

This kind of vocal interface is a poor fit for the Serenoa framework, as it requires specialized tools for annotating and analyzing large numbers of calls (the above paper cited the use of a corpus of over 10,000 calls), and for the development of utterance classification hierarchies and state transition dialogue models.

State Chart extensible Markup Language (SCXML)

• http://www.w3.org/TR/scxml/

SCXML provides a means to describe state transition models ofbehaviour and can be applied to vocal and multimodal userinterfaces
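The kind of state-transition dialogue model SCXML can express reduces, at its core, to states and event-triggered transitions; a minimal sketch (with illustrative state and event names, not SCXML syntax):

```python
# (current state, event) -> next state; unhandled events leave the state as-is.
TRANSITIONS = {
    ("greeting", "ask-rate"): "rate-info",
    ("greeting", "ask-balance"): "balance-info",
    ("rate-info", "confirm"): "hold-for-rates",
}

def step(state, event):
    """Follow a transition if one is defined, otherwise stay put."""
    return TRANSITIONS.get((state, event), state)

state = "greeting"
state = step(state, "ask-rate")   # user asks the cost of a call
state = step(state, "confirm")    # user confirms
print(state)
```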

2.4.4 Multimodal UI

Multimodal user interfaces allow users to provide input with multiple modes, e.g. typing or speaking. A single utterance can involve multiple modes, e.g. saying "tell me more about this one" while tapping at a point on the screen. Likewise, the system can respond with multiple modes of output, e.g. visual, aural and tactile: using the screen to present something, playing recorded or synthetic speech, and vibrating the device.

The wide range of possible approaches to multimodal user interfaces has hindered the development of standards. Some work that has been considered includes:


• Using spoken requests to play video or music tracks, based upon the Voice Extensible Markup Language (VoiceXML)

• Loosely coupling vocal and graphical user interfaces, where these are respectively described with VoiceXML and HTML; see http://www.w3.org/TR/mmi-arch/

• Extending HTML with JavaScript APIs for vocal input and output; see http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/

The W3C Multimodal Interaction Working Group has worked on:

• The Extensible Multimodal Annotation Markup Language (EMMA), which defines a markup language for containing and annotating the interpretation of user input, e.g. speech and deictic gestures

• Ink Markup Language (InkML), which defines a markup language for capturing traces made by a stylus or finger on a touch sensitive surface. This opens the way to user interfaces where the user writes, rather than types or speaks, the information to be input.

Human face-to-face communication is richly multimodal, with facial gestures and body language that complement what is said. Some multimodal interfaces try to replicate this for system output by combining speech with an animated avatar (a talking head). Handwriting and speech also lend themselves to biometric techniques for user authentication, perhaps in combination with face recognition using video input.

Serenoa could address a limited class of multimodal user interfaces, but it is unclear that it is timely to take this to standardization. A possible exception is for automotive applications, where multimodal interaction can be used to mitigate concerns over driver distraction, as drivers need to keep focused on the task of driving safely.

2.4.5 Industrial UI

There is plenty of potential for applying the Serenoa framework to industrial settings. Manufacturing processes frequently involve complex user interfaces for monitoring and control purposes. These can combine mechanically operated valves and sensors together with sophisticated computer-based interactive displays. Model-based user interface design techniques could be applied to reduce the cost of designing and updating industrial user interfaces. This suggests the need for work on concrete user interface modelling languages that reflect the kinds of sensors and actuators needed on the factory floor. The need for specialized models for context awareness of interactive systems in industrial settings is covered in a later section.

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context-aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people are increasingly expecting to access applications on different devices: a high resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concerns, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect that interactive systems can adapt to our preferences and, crucially, to our own limitations: for instance colour blindness, a need for increased contrast and for big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity and have difficulty with operating a mouse pointer or touch screen; bigger controls are needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in a mobile device, and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.
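As a minimal sketch of that last point, context-of-use properties can be exposed to rule conditions as named values; the property names, the context values, and the adaptation labels below are all illustrative assumptions:

```python
# The context of use exposed as a flat set of named properties.
context = {
    "device.screen.width": 320,
    "user.vision": "low",
    "environment.light": "dark",
}

# Each rule pairs a condition over context properties with a named adaptation.
rules = [
    (lambda c: c["user.vision"] == "low", "increase-font-size"),
    (lambda c: c["environment.light"] == "dark", "high-contrast-theme"),
    (lambda c: c["device.screen.width"] < 480, "single-column-layout"),
]

adaptations = [action for condition, action in rules if condition(context)]
print(adaptations)
```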

2.5.2 Industry Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.
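The robot-arm scenario above reduces to a simple context-sensitive rule; the distance threshold is an illustrative assumption:

```python
# Keep the arm stationary whenever a human is close enough to interact.
SAFE_DISTANCE_M = 2.0   # illustrative safety threshold, in metres

def arm_mode(user_distance_m):
    """Alter the arm's behaviour based upon sensed user proximity."""
    return "stationary" if user_distance_m < SAFE_DISTANCE_M else "operating"

print(arm_mode(1.5))
print(arm_mode(5.0))
```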

2.5.3 Automotive Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges in the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car to car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night due to the lights of oncoming traffic, or in the day when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down, or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012 and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive-oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing, and evaluating context-aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes

• What kinds of things are being adapted, e.g. the navigational flow, or the size of text and images
• Who is triggering and controlling the adaption process, e.g. the end user, the system, or a third party
• When the adaptation takes place, e.g. design-time or run-time
• Where adaptation takes place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity
• Which aspects of the context are involved in the adaptation
• How is the adaptation performed, i.e. what strategies and tactics are involved

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate, and compare multiple applications with regard to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.


Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation, and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the Cameleon reference framework
2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces

One way to represent adaptation rules is as follows

IF condition THEN conclusion

When executed in a forward-chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.
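The forward-chaining loop described above can be sketched in a few lines. This is an illustrative toy, not any Serenoa component; the conflict-resolution strategy shown (fire the first matching rule, each rule at most once) is just one of the choices mentioned in the text, and the model keys are invented.

```python
# Minimal forward-chaining sketch. Rules are (condition, conclusion)
# pairs over a model held as a dict; conflict resolution is
# "first matching rule wins", and each rule fires at most once.

def forward_chain(model, rules):
    """Repeatedly fire the first matching rule until none apply."""
    fired = set()
    while True:
        for i, (condition, conclusion) in enumerate(rules):
            if i not in fired and condition(model):
                conclusion(model)   # fire: update the model
                fired.add(i)        # each rule fires at most once
                break
        else:
            return model            # no applicable rules remain

# Example: a platform choice triggers a font choice, which in turn
# triggers a layout choice (invented rules, for illustration only).
rules = [
    (lambda m: m.get("platform") == "smartphone",
     lambda m: m.update(font="large")),
    (lambda m: m.get("font") == "large",
     lambda m: m.update(layout="single-column")),
]
print(forward_chain({"platform": "smartphone"}, rules))
# → {'platform': 'smartphone', 'font': 'large', 'layout': 'single-column'}
```
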

Forward-chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone, and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface and down to other targets at the concrete user interface layer.
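The event-condition-action propagation just described can be sketched as follows. The event names, model layout, and rule are invented for illustration; they are not the Serenoa rule language.

```python
# Hedged sketch of event-condition-action (ECA) propagation between
# UI layers; model shape and event names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class ECARule:
    event: str            # e.g. "aui.label.changed"
    condition: callable   # predicate over the models and the payload
    action: callable      # propagates the change

@dataclass
class ModelStore:
    aui: dict = field(default_factory=dict)  # abstract UI (already edited)
    cui: dict = field(default_factory=dict)  # concrete UIs, one per target

def dispatch(store, rules, event, payload):
    for rule in rules:
        if rule.event == event and rule.condition(store, payload):
            rule.action(store, payload)

# Propagate an abstract-label edit down to every concrete target.
rules = [ECARule(
    event="aui.label.changed",
    condition=lambda s, p: p["widget"] in s.aui,
    action=lambda s, p: [t.update({p["widget"]: p["label"]})
                         for t in s.cui.values()])]

store = ModelStore(aui={"submit": "OK"},
                   cui={"desktop": {}, "smartphone": {}, "tablet": {}})
dispatch(store, rules, "aui.label.changed",
         {"widget": "submit", "label": "Confirm"})
print(store.cui["smartphone"])  # → {'submit': 'Confirm'}
```
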

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement are required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated with lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.
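A whole-model transformation of the kind just contrasted with production rules can be sketched in plain Python (rather than XSLT or QVT). The interactor and widget names below are invented for illustration.

```python
# Illustrative sketch of a transformation rule that maps a complete
# abstract UI model to a concrete one for a chosen target platform;
# written in plain Python in the spirit of XSLT/QVT, with invented names.

def to_concrete(abstract_model, target):
    """Map each abstract interactor to a concrete widget for one target."""
    widget_map = {
        "desktop":    {"choice": "dropdown", "text_input": "textbox"},
        "smartphone": {"choice": "picker",   "text_input": "textfield"},
    }
    return [{"widget": widget_map[target][el["interactor"]],
             "label": el["label"]}
            for el in abstract_model]

aui = [{"interactor": "choice", "label": "Country"},
       {"interactor": "text_input", "label": "Name"}]
print(to_concrete(aui, "smartphone"))
# → [{'widget': 'picker', 'label': 'Country'}, {'widget': 'textfield', 'label': 'Name'}]
```
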

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Framework (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
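The three-module run-time described above can be sketched as follows. The class interfaces, the method names, and the night-theme rule are all invented for illustration; they are not the Serenoa APIs.

```python
# Hedged sketch of the three run-time modules and their wiring.
# Method names and the adaptation rule are invented, not Serenoa APIs.

class ContextManager:
    """Tracks the context of use and signals changes to listeners."""
    def __init__(self):
        self.context, self.listeners = {}, []
    def update(self, key, value):
        self.context[key] = value
        for listener in self.listeners:   # signal the change
            listener(key, value)
    def query(self, key):
        return self.context.get(key)

class AdaptationEngine:
    """Evaluates adaptation rules against context changes."""
    def __init__(self, runtime):
        self.runtime = runtime
    def on_context_change(self, key, value):
        if key == "luminosity" and value < 50:   # invented example rule
            self.runtime.apply({"theme": "night"})

class RuntimeEngine:
    """Maps the concrete UI to the final UI, applying adaptations."""
    def __init__(self):
        self.final_ui = {"theme": "day"}
    def apply(self, changes):
        self.final_ui.update(changes)

runtime = RuntimeEngine()
cm = ContextManager()
cm.listeners.append(AdaptationEngine(runtime).on_context_change)
cm.update("luminosity", 10)     # e.g. driving at night
print(runtime.final_ui)         # → {'theme': 'night'}
```
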

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.
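One way to picture this compilation step is a small generator that turns a high-level rule into a CSS media query. The rule format below is invented for illustration and is not AAL-DL syntax.

```python
# Sketch of "compiling" a high-level adaptation rule down to a CSS
# media query. The rule format is invented, not AAL-DL syntax.

def compile_to_media_query(rule):
    """Turn a (feature, threshold) condition plus a style change into CSS."""
    feature, threshold = rule["condition"]   # e.g. ("width", 480)
    return ("@media (max-%s: %dpx) {\n"
            "  %s { %s: %s; }\n"
            "}" % (feature, threshold, rule["selector"],
                   rule["property"], rule["value"]))

# Hide the navigation bar on small screens (invented example rule).
rule = {"condition": ("width", 480),
        "selector": "nav", "property": "display", "value": "none"}
print(compile_to_media_query(rule))
```
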


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance, and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block, and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If user is colour-blind then use alternative color palette

In XML this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items, and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles were published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers
• handheld - Intended for handheld devices (typically small screen, limited bandwidth)
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities, and it finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy. The more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries, and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until the 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaption to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML, or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events, and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications follow the stages listed below. These have been annotated with the dates the MBUI deliverables were envisioned by the charter to reach each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework; see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
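The "dialogues with interactors and event handlers" idea can be sketched as follows. The class and field names are invented for illustration and do not reproduce the actual ASFE-DL metamodel.

```python
# Hedged sketch of abstract dialogues with interactors and event
# handlers; names are invented, not the actual ASFE-DL metamodel.

from dataclasses import dataclass, field

@dataclass
class Interactor:
    kind: str     # e.g. "input", "activate", "navigate" (invented kinds)
    target: str   # a domain-model field or a dialogue name

@dataclass
class AbstractInteractionUnit:
    name: str
    interactors: list = field(default_factory=list)
    handlers: dict = field(default_factory=dict)   # event name → action

login = AbstractInteractionUnit(
    name="Login",
    interactors=[Interactor("input", "user.name"),
                 Interactor("activate", "session.authenticate"),
                 Interactor("navigate", "Home")])
login.handlers["authFailed"] = lambda: "stay on Login and show an error"

print([i.kind for i in login.interactors])  # → ['input', 'activate', 'navigate']
```
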

3.4.2 The ConcurTaskTrees Notation (CTT)

http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization, and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1(n)
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

The second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.
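The semantics of two of these operators can be illustrated with a tiny interpreter. This is a simplification for illustration only (in particular, the choice operator here does not disable the other branch once one alternative has started, as full CTT semantics would); the task names are invented.

```python
# Illustrative toy interpreter for the CTT "enabling" (>>) and
# "choice" ([]) operators over leaf tasks. Simplified semantics:
# choice does not disable the unchosen branch once one has started.

def enabled(expr, done):
    """Return the set of leaf tasks currently enabled.

    expr is a task name (str) or a tuple (op, left, right) with op in
    {">>", "[]"}; done is the set of completed task names.
    """
    if isinstance(expr, str):
        return set() if expr in done else {expr}
    op, left, right = expr
    if op == ">>":                      # right enabled only after left
        first = enabled(left, done)
        return first if first else enabled(right, done)
    if op == "[]":                      # either alternative may start
        return enabled(left, done) | enabled(right, done)
    raise ValueError("unsupported operator: %s" % op)

# "SelectItem >> (PayByCard [] PayByCash)"  (invented task names)
model = (">>", "SelectItem", ("[]", "PayByCard", "PayByCash"))
print(sorted(enabled(model, set())))           # → ['SelectItem']
print(sorted(enabled(model, {"SelectItem"})))  # → ['PayByCard', 'PayByCash']
```
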

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:

There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and the dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities, and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling, or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item
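The use-object hierarchy and the five elementary types described above can be encoded as a small sketch. The attribute names and the conveyor example are invented for illustration; this is not the UseML schema.

```python
# Illustrative encoding of a UseML-style use-object hierarchy;
# attribute names and the example are invented, not the UseML schema.

from dataclasses import dataclass, field
from enum import Enum

class ElementaryType(Enum):
    TRIGGER = "trigger"   # start/call/execute a device function
    SELECT = "select"     # choose from given items
    INPUT = "input"       # enter an absolute value
    OUTPUT = "output"     # gather information from the UI
    CHANGE = "change"     # make a relative change to a value

@dataclass
class UseObject:
    name: str
    user_groups: list = field(default_factory=list)  # eligible user groups
    children: list = field(default_factory=list)     # nested use objects
    elementary: ElementaryType = None                # set only on leaves

set_speed = UseObject(
    "SetConveyorSpeed", user_groups=["operator"],
    children=[UseObject("EnterSpeed", elementary=ElementaryType.INPUT),
              UseObject("ConfirmSpeed", elementary=ElementaryType.TRIGGER)])

leaves = [c.elementary.value for c in set_speed.children]
print(leaves)  # → ['input', 'trigger']
```
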

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s, to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces; see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore, UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS; see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style, and events. The template looks like:

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams, and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:

1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details see the link above.
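Since AIM expresses interactor behaviour in SCXML, a behaviour definition might look like the following sketch. The scxml, state, transition, onentry and raise elements come from the W3C SCXML specification; the state and event names for this hypothetical push-button interactor are invented for illustration:

```xml
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <!-- a discrete input event moves the interactor to the focused state -->
    <transition event="focus" target="focused"/>
  </state>
  <state id="focused">
    <transition event="select" target="activated"/>
    <transition event="blur" target="idle"/>
  </state>
  <state id="activated">
    <!-- raise an internal event that mappings can observe -->
    <onentry>
      <raise event="interactor.activated"/>
    </onentry>
    <transition event="done" target="idle"/>
  </state>
</scxml>
```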

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators, including sequence, redundancy, complementarity, assignment and equivalence.

Synchronization Mappings

These are predefined together with the interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium; see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

In this framework, task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from higher to lower level
• Abstraction: from lower to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A UsiXML extension has been proposed to enable the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source
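Putting these pieces together, an abstract presentation built from interactors and compositions might be sketched as below. This is illustrative pseudo-markup conveying the AUI concepts only, not the normative MARIA syntax:

```xml
<presentation name="search">
  <grouping name="searchForm">
    <text_edit name="query"/>                       <!-- Edit interactor -->
    <single_choice name="category"/>                <!-- Selection interactor -->
    <activator name="startSearch"/>                 <!-- Control interactor -->
  </grouping>
  <description name="results"/>                     <!-- Only-output interactor -->
  <navigator name="nextPage" target="resultsPage"/> <!-- Control interactor -->
</presentation>
```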

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. Interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back end (e.g. web services, code libraries, databases, etc.). Each declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by user interaction. Two different classes of events have been identified: Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and Activation Events, which can be raised by activators and are intended to specify the execution of some application functionality (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
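As the data model is expressed with standard XML Schema Definition constructs, a small sketch of such a model is shown below. The element names are hypothetical; the restriction illustrates how input value formats can be constrained at the abstract level:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="customerName" type="xs:string"/>
        <!-- binding a NumericalEditInRange interactor to this element
             would constrain the accepted input to 1..99 -->
        <xs:element name="quantity">
          <xs:simpleType>
            <xs:restriction base="xs:integer">
              <xs:minInclusive value="1"/>
              <xs:maxInclusive value="99"/>
            </xs:restriction>
          </xs:simpleType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```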

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers

• Mobile CUIs model graphical interfaces for mobile devices

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented on the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension of the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, with information on the title, background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute, with information on the grouping display technique (grid, fieldset, bullet, background colour or image) and on whether the elements are related by an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, an image_link, an image_map (an image with the definition of a set of areas, each one associated with a different value) or a mailto.

• An Alarm can be implemented as a text (with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis or the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);

• changing the synthesis properties (such as volume and gender);

• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces lies in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time); nomatch (the input provided does not match any acceptable input); and help (the user asks for support, in some platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano: "MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used in model-based user interface design and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
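As an illustration of how suspend/resume matters in the automotive use case, a task model fragment might be interchanged along the following lines. The element and attribute names here are hypothetical, intended only to convey the idea of a task hierarchy annotated with temporal operators; the normative metamodel is defined in the Working Draft itself:

```xml
<taskModel name="NavigateWhileDriving">
  <task name="UseNavigation" category="abstract">
    <!-- enabling: the destination must be entered before guidance starts -->
    <task name="EnterDestination" category="interaction" operator="enabling"/>
    <!-- suspend/resume: a hazard alert suspends guidance, which resumes
         once the hazard has been passed -->
    <task name="FollowGuidance" category="application" operator="suspend-resume"/>
    <task name="HazardAlert" category="system"/>
  </task>
</taskModel>
```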


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further standardization work, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much at the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, pp. 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Transactions on Computer-Human Interaction, ACM, 2009, Vol. 16, No. 4, pp. 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


  • Standarization Actions Report
  • Deliverable D621
    • Executive Summary
    • Table of Contents
    • Introduction
    • Potential opportunities for standardization
      • Task Models
      • Domain Models
      • Abstract UI Models
      • Concrete UI Models
        • WIMP (desktop GUI)
        • Touch-based GUI (smart phones and tablets)
        • Vocal UI
          • State Chart extensible Markup Language (SCXML)
            • Multimodal UI
            • Industrial UI
              • Context of Use
                • General Considerations
                • Industry Fulfilment of Safety Guidelines
                • Automotive Mitigation of Driver Distraction
                  • Multidimensional Adaptation of Service Front Ends
                    • CARF Reference Framework
                    • CADS Design Space
                    • CARFO Multidimensional Adaptation Ontology
                      • Design-time adaptation rules
                      • Run-time adaptation rules
                      • Advanced Adaptation Logic Description Language (AAL-DL)
                      • Corporate Rules for Consistent User Experience
                        • W3C Model-Based UI Working Group
                          • MBUI WG - Introduction
                          • MBUI WG History
                            • MBUI Incubator Group
                            • MBUI Workshop
                            • Formation of MBUI Working Group
                              • MBUI Working Group Charter
                                • Work Items
                                  • MBUI Submissions
                                    • Advanced Service Front-End Description Language (ASFE-DL)
                                    • The ConcurTaskTrees Notation (CTT)
                                    • Useware Markup Language (UseML)
                                    • User Interface Markup Language (UIML)
                                    • Abstract Interactor Model (AIM) Specification
                                    • Multimodal Interactor Mapping (MIM) Model Specification
                                      • Multimodal Mappings
                                      • Synchronization Mappings
                                      • Exemplary Mappings
                                        • UsiXML
                                          • Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled
                                            • MARIA
                                              • Abstract User Interface
                                              • Concrete User Interface
                                                • Concrete Desktop User Interface
                                                • Concrete Vocal User Interface
                                                  • MBUI WG Note - Introduction to Model-Based UI Design
                                                  • MBUI WG Note - Glossary of Terms
                                                  • MBUI WG Specification - Task Models for Model-Based UI Design
                                                  • MBUI WG Specification - Abstract User Interface Models
                                                  • MBUI WG Future Plans
                                                    • CoDeMoDIS proposal for a COST Action
                                                    • ISO 24744 standardisation action
                                                    • Conclusions
                                                    • References
Page 6: Standarization Actions Report - Europa · 2017-04-20 · Standarization Actions Report Project no. FP7 - 258030 Deliverable D6.2.1 Executive Summary This document provides a description

interactors which can be thought of as abstract versions of userinterface controls Each interactor is bound to the domain model aswell as a variety of properties

There is a lot of potential for standardizing an abstract userinterface design language However there are many more suchlanguages than is the case for task models This will make it harderto standardize due to the need to forge bridges between differentcamps through the establishment of common use cases a sharedvocabulary and a synthesis of ideas As such ASFE-DL will be justone input into the standardization process

The list of existing alternatives for AUIs is quite lengthy (Souchonand Vanderdonckt 2003) Next we will provide more detailedinformation regarding the two AUI languages that comprise theconsortiums portfolio of authored and co-authored languages inthis field namely UsiXML and MARIA

The USer Interface EXtensible Markup Language (UsiXML)(Limbourg et al 2005) is an XML-compliant mark-up language todescribe user interfaces for multiple contexts and differentmodalities UsiXML allows also non-developers to use the languageto describe user interfaces mainly because the elements of the UIcan be described at a high level regardless of the platform of useThe UsiXML language was submitted for a standardisation actionplan in the context of the Similar network of excellence and of theOpen Interface European project

MARIA (Model-based language for Interactive Applications)(Paternograve et al 2009) is a universal declarative multipleabstraction-level XML-based language for modelling interactiveapplications in ubiquitous environments For designers of multi-device user interfaces one advantage of using a multi-layerdescription for specifying UIs is that they do not have to learn allthe details of the many possible implementation languagessupported by the various devices but they can reason in abstractterms without being tied to a particular UI modality or even worseimplementation language In this way they can better focus on thesemantics of the interaction namely what the intended goal of theinteraction is regardless of the details and specificities of theparticular environment considered

2.4 Concrete UI Models

The concrete user interface involves a commitment to a class of device and modes of interaction. Some typical examples are examined in the following subsections. There are quite a few existing user interface languages at this level of abstraction. Some of these are widely deployed proprietary solutions, where the vendor may feel little imperative to add support for interoperable interchange of user interface designs. An open standard is likely to have a tough time in widening its support beyond a relatively small community of early adopters. The larger the community, the easier it is to gather the resources needed to create and maintain effective, easy to use tools and documentation. This is true for both open source and proprietary solutions.

Some examples of existing concrete user interface languages:

• UIML - an early example of a user interface markup language
• MXML - introduced by Macromedia for compilation into Flash SWF
• XUL - introduced by the Mozilla Foundation for the Gecko engine
• XAML - introduced by Microsoft for use with their .NET framework
• OpenLaszlo (LZX) - introduced by Laszlo Systems for their presentation server
• MARIA - developed by ISTI-CNR, combining abstract and concrete UI
• XForms - developed by W3C for rich forms interfaces

2.4.1 WIMP (desktop GUI)

The abbreviation WIMP stands for "windows, icons, menus, pointer", and describes the kind of graphical user interface common on desktop computers running operating systems such as Microsoft Windows, MacOS, and Linux + X Windows. WIMP user interfaces were originally developed by Xerox in the early seventies, but came to popular attention through the Apple Macintosh in the mid-eighties, and later through Microsoft Windows. A concrete user interface modelling language for WIMP platforms can build upon a wealth of experience. Some examples of common features include:

• scroll-able windows, inline and pop-up dialogues
• click, double click, drag and drop idioms
• window minimization, maximization and close buttons
• icons for minimized applications and as clickable buttons
• tab controls for groups of related panes
• control bars with subsidiary controls
• drop down menus and combo boxes
• keyboard short cuts as alternatives to using the mouse/trackpad
• single and multi-line text boxes
• captioned radio buttons
• captioned check boxes
• up/down spinners
• buttons with text and icons as captions
• named boxes for grouping related controls
• a variety of layout policies, e.g. absolute, horizontal, vertical, grid and table layouts

Graphical editors for creating WIMP user interfaces typically consist of a palette of controls that can be dragged on to a canvas. Once there, each control has a set of associated properties that you can update through a property sheet. These can be used to attach the desired behaviour, and it is common to define this with a scripting language that bridges the user interface controls and the application back-end.

One challenge for WIMP user interfaces is adapting to varying window sizes and resolutions. To some extent this can be addressed through layout policies that make the best use of the available space. The end user may be able to vary the font size. Scrollable windows make it possible to view a large window in a smaller screen area. However, large changes in window size and resolution call for more drastic adaptations, and one way to address this is via splitting the user interface design into multiple concrete user interface models aimed at different sizes of window.

2.4.2 Touch-based GUI (smart phones and tablets)

In the last few years there has been a rapid deployment of phones and tablets featuring a high resolution colour screen with a multi-touch sensor. Touch-based devices typically lack traditional keyboards, and have given rise to a new set of user interface design patterns. Some common features include:

• tap, double tap, long tap, drag and drop
• two finger pinch, stretch and zoom
• swipe to pan
• single rather than multiple windows
• background services
• pop-up notifications
• icons for launching applications
• suspend and resume semantics for applications
• orientation sensing and portrait/landscape adaptation
• ambient light level sensing
• proximity sensing
• GPS-based location sensing
• wide variety of display resolutions
• Bluetooth, USB and NFC interfaces
• variations in support for Web standards, especially scripting APIs

Further study is needed to see just how practical it is to define and standardize a common concrete user interface language for different touch-based platforms such as Apple's iOS and Google's Android. Variations across devices create significant challenges for developers, although some of this can be hidden through the use of libraries.

2.4.3 Vocal UI

Vocal user interfaces are commonly used by automated call centres to provide services that customers can access by phone, using their voice and the phone's key pad. Vocal interfaces have to be designed to cope with errors in speech recognition, and with ungrammatical or out of domain responses by users. Simple vocal interfaces direct the user to respond in narrow and predictable ways that can be characterized by a speech grammar. Errors can be handled by repeating or rephrasing the prompt, or by giving users the choice of using the key pad. Some relevant existing W3C specifications are:

• Voice Extensible Markup Language (VoiceXML)
• Speech Recognition Grammar Specification (SRGS)
• Semantic Interpretation for Speech Recognition (SISR)
• Speech Synthesis Markup Language (SSML)
• Pronunciation Lexicon Specification (PLS)
• Emotion Markup Language (EmotionML)
• Voice Browser Call Control (CCXML)
• State Chart XML (SCXML)

VoiceXML is similar in some respects to the Hypertext Markup Language (HTML) in its use of links and forms. VoiceXML also provides support for spoken dialogues in terms of error handling, and the use of complementary languages such as SRGS for speech grammars and SSML for control of speech synthesis and prerecorded speech.

The Serenoa framework can be applied to vocal interfaces described in VoiceXML where the speech grammars can be readily derived. This is the case for applications involving navigation through a tree of menus, where the user is directed to repeat one of the choices given in a prompt, or to tap the key pad with the number of the choice, e.g.:

M: Do you want news, sports or weather?
U: Weather
M: The weather today will be cold and windy with a chance of rain
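The constrained menu navigation described above can be sketched in a few lines of code. This is an illustrative sketch only: the function names and the tiny "grammar" are invented for this example and are not part of VoiceXML or the Serenoa framework. It shows how spoken choices and key-pad digits map onto the same menu, and how an out-of-grammar response triggers a rephrased prompt.

```python
# A minimal sketch of a constrained vocal menu dialogue: the user
# must answer with one of the choices in the prompt, or press the
# matching key-pad digit; unrecognized input rephrases the prompt.
from typing import Optional

MENU = {"1": "news", "2": "sports", "3": "weather"}

def interpret(utterance: str) -> Optional[str]:
    """Map a recognized utterance or key-pad digit to a menu choice."""
    utterance = utterance.strip().lower()
    if utterance in MENU:           # key-pad input, e.g. "3"
        return MENU[utterance]
    if utterance in MENU.values():  # spoken choice, e.g. "weather"
        return utterance
    return None                     # out-of-grammar response

def run_dialogue(inputs):
    """Drive the menu over a sequence of user inputs; returns the
    transcript of system prompts."""
    prompts = ["Do you want news, sports or weather?"]
    for user_input in inputs:
        choice = interpret(user_input)
        if choice is None:
            prompts.append("Sorry, please say news, sports or weather, "
                           "or press 1, 2 or 3.")
        else:
            prompts.append(f"You chose {choice}.")
            break
    return prompts
```

In a real VoiceXML application, the role of `interpret` would be played by the speech recognizer constrained by an SRGS grammar, and the error prompt by the platform's built-in noinput/nomatch handling.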

VoiceXML corresponds to the final user interface layer in the Cameleon Reference Framework, and could be complemented by higher level concrete user interface models for vocal interfaces. Further work is needed to clarify the requirements before standardization can take place.

More sophisticated voice interfaces encourage users to answer in an open ended way, where a statistical language model is used to classify the user's utterance, based upon an analysis of large numbers of recorded calls. The classification triggers a state transition network encoding the dialogue model. The following example is from "How may I help you?" by Gorin, Parker, Sachs and Wilpon (Proc. of IVTTA, October 1996):

M: How may I help you?
U: Can you tell me how much it is to Tokyo?
M: You want to know the cost of a call.
U: Yes, that's right.
M: Please hold for rate information.

This kind of vocal interface is a poor fit for the Serenoa framework, as it requires specialized tools for annotating and analyzing large numbers of calls (the above paper cited the use of a corpus of over 10,000 calls), and for the development of utterance classification hierarchies and state transition dialogue models.

State Chart eXtensible Markup Language (SCXML):

• http://www.w3.org/TR/scxml/

SCXML provides a means to describe state transition models of behaviour, and can be applied to vocal and multimodal user interfaces.
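A state transition model of the kind SCXML describes can be sketched as a simple transition table. The states and events below are invented for illustration, loosely following the call-rating dialogue above; this is not SCXML syntax, just the underlying state-machine idea.

```python
# A minimal table-driven state machine: (state, event) -> next state.
# States and events are illustrative only, echoing the "How may I
# help you?" dialogue; an unknown event leaves the state unchanged.

TRANSITIONS = {
    ("greeting", "rate_request"): "confirm_rate",
    ("confirm_rate", "yes"):      "rate_info",
    ("confirm_rate", "no"):       "greeting",
}

def step(state, event):
    """Return the next state, or stay put on an unrecognized event."""
    return TRANSITIONS.get((state, event), state)

def run(events, start="greeting"):
    """Feed a sequence of events through the machine."""
    state = start
    for event in events:
        state = step(state, event)
    return state
```

SCXML adds considerably more than this (nested and parallel states, executable content, data models), but the core interchange value is precisely such a declarative table of states and transitions.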

2.4.4 Multimodal UI

Multimodal user interfaces allow users to provide input with multiple modes, e.g. typing or speaking. A single utterance can involve multiple modes, e.g. saying "tell me more about this one" while tapping at a point on the screen. Likewise, the system can respond with multiple modes of output, e.g. visual, aural and tactile: using the screen to present something, playing recorded or synthetic speech, and vibrating the device.

The wide range of possible approaches to multimodal user interfaces has hindered the development of standards. Some work that has been considered includes:

• using spoken requests to play video or music tracks, based upon the Voice Extensible Markup Language (VoiceXML)
• loosely coupling vocal and graphical user interfaces, where these are respectively described with VoiceXML and HTML, see http://www.w3.org/TR/mmi-arch/
• extending HTML with JavaScript APIs for vocal input and output, see http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/

The W3C Multimodal Interaction Working Group has worked on:

• the Extensible Multimodal Annotation Markup Language (EMMA), which defines a markup language for containing and annotating the interpretation of user input, e.g. speech and deictic gestures
• the Ink Markup Language (InkML), which defines a markup language for capturing traces made by a stylus or finger on a touch sensitive surface. This opens the way to user interfaces where the user writes, rather than types or speaks, the information to be input.

Human face to face communication is richly multimodal, with facial gestures and body language that complement what is said. Some multimodal interfaces try to replicate this for system output by combining speech with an animated avatar (a "talking head"). Handwriting and speech also lend themselves to biometric techniques for user authentication, perhaps in combination with face recognition using video input.

Serenoa could address a limited class of multimodal user interfaces, but it is unclear that it is timely to take this to standardization. A possible exception is automotive applications, where multimodal interaction can be used to mitigate concerns over driver distraction, as drivers need to keep focused on the task of driving safely.

2.4.5 Industrial UI

There is plenty of potential for applying the Serenoa framework to industrial settings. Manufacturing processes frequently involve complex user interfaces for monitoring and control purposes. These can combine mechanically operated valves and sensors together with sophisticated computer based interactive displays. Model-based user interface design techniques could be applied to reduce the cost of designing and updating industrial user interfaces. This suggests the need for work on concrete user interface modelling languages that reflect the kinds of sensors and actuators needed on the factory floor. The need for specialized models for context awareness of interactive systems in industrial settings is covered in a later section.

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations, and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people increasingly expect to access applications on different devices: a high resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concerns, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect that interactive systems can adapt to our preferences and, crucially, to our own limitations: for instance, colour blindness, a need for increased contrast and for big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity, and have difficulty with operating a mouse pointer or touch screen; bigger controls are then needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in a mobile device, and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.
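As a hedged illustration of what such a formalized context-of-use model might look like when exposed through an API, the sketch below groups the three aspects listed earlier (device, user, environment) and lets a rule condition query them by a dotted property path. All class and field names here are invented for this example, not taken from any Serenoa specification or W3C standard.

```python
# An illustrative context-of-use model covering the three aspects
# discussed above: device, user, and environment. Every name below
# is hypothetical, not from any standard.
from dataclasses import dataclass

@dataclass
class DeviceContext:
    screen_width: int        # static capability
    screen_height: int
    orientation: str         # dynamic: "portrait" or "landscape"

@dataclass
class UserContext:
    colour_blind: bool
    needs_large_fonts: bool

@dataclass
class EnvironmentContext:
    ambient_noise_db: float
    battery_level: float     # 0.0 .. 1.0

@dataclass
class ContextOfUse:
    device: DeviceContext
    user: UserContext
    environment: EnvironmentContext

    def query(self, path: str):
        """Resolve a dotted property path, as an adaptation rule
        condition might, e.g. 'user.colour_blind'."""
        obj = self
        for part in path.split("."):
            obj = getattr(obj, part)
        return obj

ctx = ContextOfUse(
    DeviceContext(1920, 1080, "landscape"),
    UserContext(colour_blind=True, needs_large_fonts=False),
    EnvironmentContext(ambient_noise_db=40.0, battery_level=0.8),
)
```

A rule condition could then be written against paths such as `ctx.query("device.orientation")`, which is one simple way the context could surface as "properties in the conditions of adaptation rules".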

2.5.2 Industry: Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.

2.5.3 Automotive: Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges, given the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car to car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation, and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night, due to the lights of oncoming traffic, or in the day, when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down, or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012, and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing, and evaluating context aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes:

• What kinds of things are being adapted, e.g. the navigational flow, or the size of text and images?
• Who is triggering and controlling the adaptation process, e.g. the end user, the system, or a third party?
• When does the adaptation take place, e.g. design-time or run-time?
• Where does adaptation take place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity?
• Which aspects of the context are involved in the adaptation?
• How is the adaptation performed, i.e. what strategies and tactics are involved?

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate, and compare multiple applications with regard to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.

Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation, and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. to propagate the effects of changes across layers in the Cameleon reference framework
2. to provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in a forward chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.
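The forward chaining cycle just described can be sketched in a few lines. This is a naive matcher with first-matching-rule conflict resolution (no RETE-style indexing), and the rule and fact names are invented for the example rather than drawn from any Serenoa rule set.

```python
# A naive forward-chaining rule engine: repeatedly find rules whose
# conditions match the current facts, fire their conclusions to
# update the model, and stop when no rule adds anything new.

def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (conditions, conclusion)
    where conditions is a set of facts that must all hold."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule
                changed = True
                break                   # first matching rule wins
    return facts

# Illustrative design-time rules (names are hypothetical):
RULES = [
    ({"small_screen"}, "single_column_layout"),
    ({"single_column_layout", "touch_input"}, "large_tap_targets"),
]
```

Note how the second rule fires only because the first rule's conclusion is added to the fact set: this chaining is what "propagating the effects of changes" amounts to in a production-rule setting.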

Forward chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later, when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface, and down to other targets at the concrete user interface layer.

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement is required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high level rules avoid dealing with details that are better treated with lower level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.

2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use, and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud, or in the device itself where the resource constraints permit this.
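A rough sketch of how the three run-time modules might interact, using the event-condition-action pattern: the Context Manager signals changes, the Adaptation Engine matches rules against the context, and the Run-time Engine applies the resulting adaptations. The module interfaces and the rule shown are illustrative only, and do not reflect the actual Serenoa or AAL-DL APIs.

```python
# Illustrative wiring of the three run-time modules described above.
# All interfaces here are hypothetical sketches, not Serenoa APIs.

class ContextManager:
    """Tracks the context of use and signals changes to listeners."""
    def __init__(self):
        self.context = {}
        self.listeners = []

    def update(self, key, value):
        self.context[key] = value
        for listener in self.listeners:
            listener(key, self.context)   # signal the change

class AdaptationEngine:
    """Runs event-condition-action rules against the context."""
    def __init__(self, rules, runtime):
        self.rules = rules                # list of (event, condition, action)
        self.runtime = runtime

    def on_change(self, event, context):
        for rule_event, condition, action in self.rules:
            if rule_event == event and condition(context):
                self.runtime.apply(action)

class RuntimeEngine:
    """Would map adaptations onto the final user interface."""
    def __init__(self):
        self.applied = []

    def apply(self, action):
        self.applied.append(action)

# Example event-condition-action rule (hypothetical):
runtime = RuntimeEngine()
engine = AdaptationEngine(
    rules=[("ambient_noise",
            lambda ctx: ctx["ambient_noise"] > 70,
            "switch_to_visual_alerts")],
    runtime=runtime)
manager = ContextManager()
manager.listeners.append(engine.on_change)
manager.update("ambient_noise", 85)       # noisy environment detected
```

The point of the separation is visible even in this toy version: the rule set can be swapped without touching the context sensing or the UI mapping, which is what makes a declarative language like AAL-DL attractive for the middle module.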

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high level adaptation rules expressed in AAL-DL into the final user interface.

The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first order adaptation rules for a specific context of use, and for second order rules that select which first order rules to apply. Further work is under consideration for third order rules that act on second order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design time transformations.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high level syntax aimed at direct editing.

Here is an example of a rule

bull If user is colour-blind then use alternative color palette

In XML this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high level rule syntax, rather than at the level expressed in the XML example.
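To make the contrast concrete, the sketch below renders the colour-blind rule above as a simple condition-action pair, together with a toy parser for a hypothetical high level syntax. Both the syntax and the property names are invented for illustration; they are not AAL-DL, whose actual XML interchange format is defined in Deliverable D3.3.1.

```python
# The colour-blind rule above as a condition-action pair, plus a toy
# parser for a hypothetical high level syntax of the form
#   "IF <property> THEN <action>".
# Neither form is actual AAL-DL; both are illustrative only.

def parse_rule(text):
    """Parse e.g. 'IF user.colour_blind THEN use_alternative_palette'."""
    head, _, action = text.partition(" THEN ")
    assert head.startswith("IF "), "malformed rule"
    return {"condition": head[len("IF "):].strip(),
            "action": action.strip()}

def applies(rule, context):
    """True when the rule's condition property is set in the context."""
    return bool(context.get(rule["condition"], False))

rule = parse_rule("IF user.colour_blind THEN use_alternative_palette")
```

A syntax at roughly this level of terseness is what developers might prefer to edit directly, with tooling compiling it down to the verbose XML interchange form.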

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.

2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel ("skinning" the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and to assess its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items, and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, at a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers
• handheld - Intended for handheld devices (typically small screen, limited bandwidth)
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities, and it finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries, and client side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, with the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group.

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter/
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face to face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter

The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaption to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:

Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications follow the following stages. These have been annotated with the dates the MBUI deliverables were envisioned by the charter to reach each stage:

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.

3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet requirements for context aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:

Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:

The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1(n)
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

Where the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.
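To make the notation concrete, a CTT-style task tree can be sketched as a small recursive structure (an informal illustration, not the official CTT metamodel; the class, the operator encoding and the task names are all hypothetical):

```python
# Minimal sketch of a CTT-style task tree, assuming a representation
# where each node holds the temporal operator linking its children.
# Illustrative only; not the normative metamodel.

class Task:
    def __init__(self, name, operator=None, children=(), allocation="user"):
        self.name = name
        self.operator = operator      # e.g. ">>" (enabling), "[]" (choice)
        self.children = list(children)
        self.allocation = allocation  # "system", "user input" or "cognition"

    def describe(self, depth=0):
        """Render the tree as indented lines, one per task."""
        op = f" {self.operator}" if self.operator else ""
        lines = ["  " * depth + self.name + op]
        for child in self.children:
            lines.extend(child.describe(depth + 1))
        return lines

# "AccessATM" enables "WithdrawCash"; withdrawal offers a choice of amounts.
model = Task("UseATM", ">>", [
    Task("AccessATM", allocation="user input"),
    Task("WithdrawCash", "[]", [Task("Amount50"), Task("Amount100")]),
])
print("\n".join(model.describe()))
```

The point of the sketch is simply that temporal operators sit between siblings at each level of the hierarchy, which is exactly what the XML interchange format serializes.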

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:

There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization

The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item
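The five elementary use object types, and the annotated use-object hierarchy they sit in, can be sketched as follows (an illustrative reading of the model, not UseML's actual schema; all names are hypothetical):

```python
# Illustrative sketch: the five elementary use object types of the use
# model, plus a hypothetical use-object node carrying the annotations
# mentioned above (user groups). Not UseML's real schema.

from enum import Enum

class ElementaryUseObject(Enum):
    TRIGGER = "start, call or execute a device function"
    SELECT = "choose one or more items from given ones"
    INPUT = "enter an absolute value, overwriting previous values"
    OUTPUT = "gather information from the user interface"
    CHANGE = "make relative changes to an existing value or item"

class UseObject:
    def __init__(self, name, kind=None, user_groups=(), children=()):
        self.name = name
        self.kind = kind              # None for structuring use objects
        self.user_groups = list(user_groups)
        self.children = list(children)

# A structuring use object containing two elementary ones.
set_speed = UseObject("SetConveyorSpeed", user_groups=["operator"], children=[
    UseObject("EnterSpeed", ElementaryUseObject.INPUT),
    UseObject("ApplySpeed", ElementaryUseObject.TRIGGER),
])
print(len(set_speed.children))  # 2
```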

The following diagram describes the UseDM meta-model.

The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance independent presentation concepts with appliance dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

    UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application dependent but appliance independent events, and then bind them to appliance dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
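Since UIML is plain XML, a skeleton like the one above can be processed with any standard XML library. The following sketch (illustrative only; the name attribute value is hypothetical) lists the five interface sections:

```python
# Illustrative sketch: parsing a minimal UIML skeleton with Python's
# standard library and listing the five sections of the interface.
import xml.etree.ElementTree as ET

uiml = """<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="demo" class="MyApps">
    <description/><structure/><data/><style/><events/>
  </interface>
  <logic/>
</uiml>"""

root = ET.fromstring(uiml)
interface = root.find("interface")
print([child.tag for child in interface])
# ['description', 'structure', 'data', 'style', 'events']
```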

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:

1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:

AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on Abstract Interactor Model (AIM) Specifications.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture based navigation

3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, MOF-XMI and as an ontology using OWL Full 2.0. The interchange syntax is XML and defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), which are compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high- to lower-level
• Abstraction: from low- to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is presented to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level and common to all languages are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice versa. This mechanism allows the modelling of correlation between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases etc.). A declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
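The two-way data binding described for the Data Model can be sketched in plain Python (an illustrative analogy, not MARIA's actual runtime or API; all names are hypothetical):

```python
# Illustrative sketch of two-way binding between an interactor and a
# data model element: changing the interactor updates the bound data
# element and vice versa. A plain-Python analogy, not MARIA itself.

class DataModel:
    def __init__(self):
        self._values = {}
        self._bound = {}  # element name -> list of bound interactors

    def bind(self, name, interactor):
        self._bound.setdefault(name, []).append(interactor)
        interactor.model, interactor.element = self, name

    def set(self, name, value):
        self._values[name] = value
        for interactor in self._bound.get(name, []):
            interactor.state = value  # model change updates interactors

    def get(self, name):
        return self._values.get(name)

class TextEdit:
    state = None

    def user_input(self, value):
        self.state = value
        self.model.set(self.element, value)  # interactor change updates model

model = DataModel()
field = TextEdit()
model.bind("userName", field)
field.user_input("Alice")
print(model.get("userName"))  # Alice
model.set("userName", "Bob")
print(field.state)            # Bob
```

The same mechanism, declared in markup rather than code, is what lets MARIA express conditional layout and conditional connections between presentations in terms of data values.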

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers

• Mobile CUIs model graphical interfaces for mobile devices

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented on the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add platform-dependent (but still implementation language independent) information to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending the existing entities, through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute holding information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute holding information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which includes also up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep to emit a sound just before recording, maxtime to set the maximum duration of the recording, and finalsilence to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)

• Changing the synthesis properties (such as volume and gender)

• Inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
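This vocal event model can be sketched as a small dispatch table (illustrative only; the message and re-prompt attributes follow the description above, while everything else, including the handler texts, is hypothetical):

```python
# Illustrative sketch of the vocal event model described above: each
# event type (noinput, nomatch, help) carries a message and a re-prompt
# flag. The dispatcher and the texts are hypothetical, not from MARIA.

EVENT_HANDLERS = {
    "noinput": {"message": "Sorry, I did not hear anything.", "re-prompt": True},
    "nomatch": {"message": "Sorry, I did not understand.", "re-prompt": True},
    "help": {"message": "You can say an amount or 'cancel'.", "re-prompt": False},
}

def handle_event(event, last_prompt):
    """Return the utterances the vocal platform should synthesize."""
    handler = EVENT_HANDLERS[event]
    utterances = [handler["message"]]
    if handler["re-prompt"]:
        utterances.append(last_prompt)  # repeat the last communication
    return utterances

print(handle_event("noinput", "How much would you like to withdraw?"))
```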

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.

3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation, and refines the metamodel introduced in earlier versions of CTT.

The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend / resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered optional and is not a normative part of the specification.
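As a purely hypothetical illustration of a non-XML interchange format, a small task model could be rendered in JSON along the following lines. The structure and attribute names here are invented for the example and are not taken from the specification's schema:

```python
import json

# Hypothetical JSON rendering of a small task model (illustrative only).
task_model = {
    "task": "AccessWeather",
    "operator": "enabling",  # one of the temporal operators listed above
    "subtasks": [
        {"task": "SelectCity", "type": "interaction"},
        {"task": "ShowForecast", "type": "application"},
    ],
}

encoded = json.dumps(task_model)   # serialize for interchange
decoded = json.loads(encoded)      # round-trip back to a data structure
```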

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration: it is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.

3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.

The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


of these are widely deployed proprietary solutions, where the vendor may feel little imperative to add support for interoperable interchange of user interface designs. An open standard is likely to have a tough time in widening its support beyond a relatively small community of early adopters. The larger the community, the easier it is to gather the resources needed to create and maintain effective, easy to use tools and documentation. This is true for both open source and proprietary solutions.

Some examples of existing concrete user interface languages:

• UIML - early example of a user interface markup language
• MXML - introduced by Macromedia for compilation into Flash SWF
• XUL - introduced by Mozilla Foundation for the Gecko engine
• XAML - introduced by Microsoft for use with their .NET framework
• OpenLaszlo (LZX) - introduced by Laszlo Systems for their presentation server
• MARIA - developed by ISTI-CNR and combining abstract and concrete UI
• XForms - developed by W3C for rich forms interfaces

2.4.1 WIMP (desktop GUI)

The abbreviation WIMP stands for "windows, icons, menus, pointer", and describes the kind of graphical user interface common on desktop computers running operating systems such as Microsoft Windows, MacOS and Linux + X Windows. WIMP user interfaces were originally developed by Xerox in the early seventies, but came to popular attention through the Apple Macintosh in the mid-eighties, and later Microsoft Windows. A concrete user interface modelling language for WIMP platforms can build upon a wealth of experience. Some examples of common features include:

• scroll-able windows, inline and pop-up dialogues
• click, double click, drag and drop idioms
• window minimization, maximization and close buttons
• icons for minimized applications and as clickable buttons
• tab controls for groups of related panes
• control bars with subsidiary controls
• drop down menus and combo boxes
• keyboard short cuts as alternatives to using the mouse / trackpad
• single and multi-line text boxes
• captioned radio buttons
• captioned check boxes

• up/down spinners
• buttons with text and icons as captions
• named boxes for grouping related controls
• a variety of layout policies, e.g. absolute, horizontal, vertical, grid and table layouts

Graphical editors for creating WIMP user interfaces typically consist of a palette of controls that can be dragged on to a canvas. Once there, each control has a set of associated properties that you can update through a property sheet. These can be used to attach the desired behaviour, and it is common to define this with a scripting language that bridges the user interface controls and the application back-end.

One challenge for WIMP user interfaces is adapting to varying window sizes and resolutions. To some extent this can be addressed through layout policies that make the best use of the available space. The end user may be able to vary the font size, and scrollable windows make it possible to view a large window in a smaller screen area. However, large changes in window size and resolution call for more drastic adaptations, and one way to address this is via splitting the user interface design into multiple concrete user interface models aimed at different sizes of window.

2.4.2 Touch-based GUI (smart phones and tablets)

In the last few years there has been a rapid deployment of phones and tablets featuring a high resolution colour screen with a multi-touch sensor. Touch-based devices typically lack traditional keyboards, and have given rise to a new set of user interface design patterns. Some common features include:

• tap, double tap, long tap, drag and drop
• two finger pinch, stretch and zoom
• swipe to pan
• single rather than multiple windows
• background services
• pop-up notifications
• icons for launching applications
• suspend and resume semantics for applications
• orientation sensing and portrait/landscape adaptation
• ambient light level sensing
• proximity sensing
• GPS-based location sensing
• wide variety of display resolutions
• Bluetooth, USB and NFC interfaces
• variations in support for Web standards, especially scripting APIs

Further study is needed to see just how practical it is to define and standardize a common concrete user interface language for different touch-based platforms, such as Apple's iOS and Google's Android. Variations across devices create significant challenges for developers, although some of this can be hidden through the use of libraries.

2.4.3 Vocal UI

Vocal user interfaces are commonly used by automated call centres to provide services that customers can access by phone, using their voice and the phone's key pad. Vocal interfaces have to be designed to cope with errors in speech recognition and with ungrammatical or out-of-domain responses by users. Simple vocal interfaces direct the user to respond in narrow and predictable ways that can be characterized by a speech grammar. Errors can be handled via repeating or rephrasing the prompt, or by giving users the choice of using the key pad. Some relevant existing W3C specifications are:

• Voice Extensible Markup Language (VoiceXML)
• Speech Recognition Grammar Specification (SRGS)
• Semantic Interpretation for Speech Recognition (SISR)
• Speech Synthesis Markup Language (SSML)
• Pronunciation Lexicon Specification (PLS)
• Emotion Markup Language (EmotionML)
• Voice Browser Call Control (CCXML)
• State Chart XML (SCXML)

VoiceXML is similar in some respects to the Hypertext Markup Language (HTML) in its use of links and forms. VoiceXML also provides support for spoken dialogues, in terms of error handling and the use of complementary languages such as SRGS for speech grammars and SSML for control of speech synthesis and prerecorded speech.

The Serenoa framework can be applied to vocal interfaces described in VoiceXML where the speech grammars can be readily derived. This is the case for applications involving navigation through a tree of menus, where the user is directed to repeat one of the choices given in a prompt, or to tap the key pad with the number of the choice, e.g.

M: Do you want news, sports or weather?
U: Weather.
M: The weather today will be cold and windy, with a chance of rain.

VoiceXML corresponds to the final user interface layer in the Cameleon Reference Framework, and could be complemented by higher level concrete user interface models for vocal interfaces. Further work is needed to clarify the requirements before standardization can take place.

More sophisticated voice interfaces encourage users to answer in an open ended way, where a statistical language model is used to classify the user's utterance based upon an analysis of large numbers of recorded calls. The classification triggers a state transition network encoding the dialogue model. The following example is from "How may I help you?" by Gorin, Parker, Sachs and Wilpon, Proc. of IVITA, October 1996:

M: How may I help you?
U: Can you tell me how much it is to Tokyo?
M: You want to know the cost of a call?
U: Yes, that's right.
M: Please hold for rate information.

This kind of vocal interface is a poor fit for the Serenoa framework, as it requires specialized tools for annotating and analyzing large numbers of calls (the above paper cited the use of a corpus of over 10,000 calls), and for the development of utterance classification hierarchies and state transition dialogue models.

State Chart extensible Markup Language (SCXML)

• http://www.w3.org/TR/scxml/

SCXML provides a means to describe state transition models of behaviour, and can be applied to vocal and multimodal user interfaces.
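The essence of such a state transition model can be sketched as a table of (state, event) pairs. The states and events below are invented for illustration, echoing the vocal menu example given earlier; this is not SCXML syntax:

```python
# Minimal state-transition model in the spirit of SCXML (illustrative only).
transitions = {
    ("prompt", "utterance"): "confirm",
    ("confirm", "yes"): "done",
    ("confirm", "no"): "prompt",
}

def step(state, event):
    """Follow a transition; unknown events leave the state unchanged."""
    return transitions.get((state, event), state)

state = "prompt"
for event in ["utterance", "no", "utterance", "yes"]:
    state = step(state, event)
```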

2.4.4 Multimodal UI

Multimodal user interfaces allow users to provide input with multiple modes, e.g. typing or speaking. A single utterance can involve multiple modes, e.g. saying "tell me more about this one" while tapping at a point on the screen. Likewise, the system can respond with multiple modes of output, e.g. visual, aural and tactile: using the screen to present something, playing recorded or synthetic speech, and vibrating the device.

The wide range of possible approaches to multimodal user interfaces has hindered the development of standards. Some work that has been considered includes:

• Using spoken requests to play video or music tracks, based upon the Voice Extensible Markup Language (VoiceXML)

• Loosely coupling vocal and graphical user interfaces, where these are respectively described with VoiceXML and HTML; see http://www.w3.org/TR/mmi-arch/

• Extending HTML with JavaScript APIs for vocal input and output; see http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/

The W3C Multimodal Interaction Working Group has worked on:

• The Extensible Multimodal Annotation Markup Language (EMMA), which defines a markup language for containing and annotating the interpretation of user input, e.g. speech and deictic gestures

• Ink Markup Language (InkML), which defines a markup language for capturing traces made by a stylus or finger on a touch sensitive surface. This opens the way to user interfaces where the user writes, rather than types or speaks, the information to be input.

Human face-to-face communication is richly multimodal, with facial gestures and body language that complement what is said. Some multimodal interfaces try to replicate this for system output by combining speech with an animated avatar (a talking head). Handwriting and speech also lend themselves to biometric techniques for user authentication, perhaps in combination with face recognition using video input.

Serenoa could address a limited class of multimodal user interfaces, but it is unclear that it is timely to take this to standardization. A possible exception is for automotive applications, where multimodal interaction can be used to mitigate concerns over driver distraction, since drivers need to keep focused on the task of driving safely.

2.4.5 Industrial UI

There is plenty of potential for applying the Serenoa framework to industrial settings. Manufacturing processes frequently involve complex user interfaces for monitoring and control purposes. These can combine mechanically operated valves and sensors together with sophisticated computer based interactive displays. Model-based user interface design techniques could be applied to reduce the cost of designing and updating industrial user interfaces. This suggests the need for work on concrete user interface modelling languages that reflect the kinds of sensors and actuators needed on the factory floor. The need for specialized models for context awareness of interactive systems in industrial settings is covered in a later section.

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people are increasingly expecting to access applications on different devices: a high resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concerns, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect that interactive systems can adapt to our preferences and, crucially, to our own limitations: for instance, colour blindness, a need for increased contrast and for big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity, and have difficulty with operating a mouse pointer or touch screen. Bigger controls are needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in a mobile device, and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.
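A sketch of how such a conceptual model might be exposed as properties for use in rule conditions is given below, covering the three aspects listed earlier: device capabilities, user preferences and the environment. All of the names and the threshold are assumptions for illustration, not a proposed standard:

```python
from dataclasses import dataclass

# Illustrative context-of-use model (names are assumptions, not a standard).
@dataclass
class ContextOfUse:
    screen_width: int          # device capability (static)
    orientation: str           # device capability (dynamic)
    prefers_large_fonts: bool  # user preference
    ambient_noise_db: float    # environment

def needs_big_controls(ctx: ContextOfUse) -> bool:
    """Example rule condition expressed over context properties."""
    return ctx.prefers_large_fonts or ctx.screen_width < 480

ctx = ContextOfUse(screen_width=1080, orientation="portrait",
                   prefers_large_fonts=True, ambient_noise_db=82.5)
```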

2.5.2 Industry Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.

2.5.3 Automotive Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges: the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car to car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation, and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night, due to the lights of oncoming traffic, or in the day when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down, or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012, and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing and evaluating context aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes:

• What kinds of things are being adapted, e.g. the navigational flow, or the size of text and images?

• Who is triggering and controlling the adaption process, e.g. the end user, the system, or a third party?

• When does the adaptation take place, e.g. at design-time or run-time?

• Where does adaptation take place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity?

• Which aspects of the context are involved in the adaptation?

• How is the adaptation performed, i.e. what strategies and tactics are involved?

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate and compare multiple applications in regards to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.

Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the Cameleon reference framework

2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in a forward chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.

Forward-chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later when all of the applicable rules have fired.
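The forward-chaining cycle described above can be illustrated with a minimal sketch in Python (the rule names, model fields and conclusions are hypothetical, and this naive scan does not use RETE-style optimization):

```python
# Minimal forward-chaining rule engine: each rule has a condition
# (a side-effect-free test on the model) and a conclusion (an update).
# Rules fire until no further rule matches; ties go to the first match.

def forward_chain(model, rules):
    fired = set()
    while True:
        for name, condition, conclusion in rules:
            if name not in fired and condition(model):
                conclusion(model)   # fire the rule, updating the model
                fired.add(name)
                break               # re-scan from the first rule
        else:
            return model            # no applicable rule left

# Hypothetical design-time rules deriving concrete UI hints.
rules = [
    ("small-screen", lambda m: m["screen"] == "phone" and "layout" not in m,
                     lambda m: m.update(layout="single-column")),
    ("compact-menu", lambda m: m.get("layout") == "single-column",
                     lambda m: m.update(menu="drop-down")),
]

model = forward_chain({"screen": "phone"}, rules)
print(model)  # {'screen': 'phone', 'layout': 'single-column', 'menu': 'drop-down'}
```

Each rule fires at most once here, matching the description above of execution continuing until all applicable rules have fired.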

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface, and down to other targets at the concrete user interface layer.

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement is required. The rules can be written to pass off such tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated with lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Framework (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.

2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).
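The event-condition-action pattern can be sketched as follows (a minimal illustration in Python, not AAL-DL itself; the context fields and the adaptation are hypothetical):

```python
# Event-condition-action (ECA) sketch: events signal changes in the
# context of use; each rule's condition is checked against the current
# context, and matching actions adapt the user interface state.

class ECARule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

def on_event(event, context, ui, rules):
    for rule in rules:
        if rule.event == event and rule.condition(context):
            rule.action(ui)

# Hypothetical adaptation: switch to a high-contrast theme in bright light.
rules = [ECARule("environment-changed",
                 lambda ctx: ctx["ambient_light"] == "bright",
                 lambda ui: ui.update(theme="high-contrast"))]

context = {"ambient_light": "bright"}
ui_state = {"theme": "default"}
on_event("environment-changed", context, ui_state, rules)
print(ui_state)  # {'theme': 'high-contrast'}
```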

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
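A minimal sketch of how the three modules could fit together (module responsibilities as described above; all class names, method names and the sample rule are illustrative, not the Serenoa implementation):

```python
# Sketch of the three run-time modules: the Context Manager tracks and
# signals changes in the context of use, the Adaptation Engine applies
# rules to those changes, and the Run-time Engine renders the result.

class ContextManager:
    def __init__(self):
        self.context, self.listeners = {}, []

    def update(self, key, value):            # signal a context change
        self.context[key] = value
        for listener in self.listeners:
            listener(key, value, self.context)

class AdaptationEngine:
    def __init__(self, runtime):
        self.runtime = runtime

    def on_change(self, key, value, context):
        # stand-in for executing AAL-DL rules against the change
        if key == "device" and value == "phone":
            self.runtime.apply({"layout": "single-column"})

class RuntimeEngine:
    def __init__(self):
        self.final_ui = {"layout": "two-column"}

    def apply(self, adaptation):             # map concrete UI to final UI
        self.final_ui.update(adaptation)

runtime = RuntimeEngine()
manager = ContextManager()
manager.listeners.append(AdaptationEngine(runtime).on_change)
manager.update("device", "phone")
print(runtime.final_ui)  # {'layout': 'single-column'}
```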

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.
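To make the compilation idea concrete, a high-level rule could be lowered to a CSS Media Query along these lines (a toy translation sketched in Python, not the AAL-DL compiler; the feature, selector and declarations are hypothetical):

```python
# Toy "compilation" of a high-level adaptation rule into a CSS media
# query: the rule's condition becomes a media feature test, and its
# conclusion becomes a block of style declarations.

def compile_rule(feature, value, selector, declarations):
    body = "; ".join(f"{prop}: {val}" for prop, val in declarations.items())
    return f"@media ({feature}: {value}) {{ {selector} {{ {body}; }} }}"

css = compile_rule("max-width", "480px", "nav", {"display": "none"})
print(css)  # @media (max-width: 480px) { nav { display: none; } }
```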

The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If user is colour-blind then use alternative color palette

In XML this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.

2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and to assess its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on both desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles were published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115

W3C went on to work on a device-independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers
• handheld - Intended for handheld devices (typically small screen, limited bandwidth)
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities, and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy. The more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, leading to the launch of the Model-Based User Interfaces Working Group.

3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011, and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter

The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and by the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:

Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers).

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications pass through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.

3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet the requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:

Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.

The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1(n)
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

Here the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:

There is also an XML schema to support interchange of models in the XML format.
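The hierarchical structure, task allocation and temporal operators described above can be sketched as a small task tree (the task names and classes are illustrative, not the CTT metamodel itself):

```python
# Sketch of a CTT-style task tree: tasks are allocated to the user or
# the system, structured hierarchically, and each sibling carries a
# temporal operator (such as enabling, ">>") linking it to the next.

class Task:
    def __init__(self, name, allocation="user", operator=None, children=()):
        self.name = name
        self.allocation = allocation   # "user", "system" or "cognition"
        self.operator = operator       # temporal operator to next sibling
        self.children = list(children)

    def leaves(self):
        if not self.children:
            return [self.name]
        return [leaf for child in self.children for leaf in child.leaves()]

# "Log in" decomposes into entering credentials, then (>>) a system check.
login = Task("Log in", children=[
    Task("Enter credentials", allocation="user", operator=">>"),
    Task("Verify credentials", allocation="system"),
])
print(login.leaves())  # ['Enter credentials', 'Verify credentials']
```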

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization

The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated with attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model:

The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
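The five-section structure can be checked mechanically. The sketch below parses a fragment mirroring the template above with Python's standard XML library (for illustration only; real UIML documents are of course richer than this skeleton):

```python
# Parse a skeletal UIML document and list the sections of the
# interface element, using only the Python standard library.
import xml.etree.ElementTree as ET

uiml = """<uiml version="2.0">
  <interface name="" class="MyApps">
    <description/><structure/><data/><style/><events/>
  </interface>
  <logic/>
</uiml>"""

root = ET.fromstring(uiml)
interface = root.find("interface")
print([child.tag for child in interface])
# ['description', 'structure', 'data', 'style', 'events']
```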

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:

1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:

AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.
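The event-based state transition behaviour that AIM expresses in SCXML can be illustrated with a plain-Python state machine (a stand-in for SCXML semantics; the interactor states and events here are hypothetical):

```python
# Event-based state transitions for an interactor, in the spirit of
# SCXML: a transition table maps (state, event) to the next state.

TRANSITIONS = {
    ("idle", "focus"): "focused",
    ("focused", "input"): "editing",
    ("editing", "submit"): "idle",
}

def step(state, event):
    return TRANSITIONS.get((state, event), state)  # ignore unknown events

state = "idle"
for event in ["focus", "input", "submit"]:
    state = step(state, event)
print(state)  # idle
```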

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend
• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation

3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), which are compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower level
• Abstraction: from low to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user but not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.
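As an illustration, an abstract presentation using these interactor subtypes might be sketched as follows. The element names approximate the concepts in the spirit of MARIA; they are not the exact MARIA schema, and the presentation content is invented for the example.

```xml
<!-- Illustrative sketch of a MARIA-style abstract presentation;
     element names approximate the concepts, not the exact MARIA schema -->
<presentation name="flight_search">
  <single_choice id="cabin_class">   <!-- Selection: one value from a list -->
    <choice value="economy"/>
    <choice value="business"/>
  </single_choice>
  <text_edit id="destination"/>      <!-- Edit: textual input -->
  <activator id="search"/>           <!-- Control: activates a functionality -->
  <navigator id="to_results"/>       <!-- Control: switches presentation -->
  <description id="hint">            <!-- Only output: informational media -->
    Enter a destination and press Search.
  </description>
</presentation>
```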

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.
• Relation: a group where two or more elements are related to each other.
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.
• Repeater: used to repeat the content according to data retrieved from a generic data source.

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features, available already at the abstract level and common to all languages, are:

• Data Model: The interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). A declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
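The data model binding and back-end declarations described above could be sketched along these lines. Again, this is an illustrative approximation rather than the normative MARIA syntax; the function name and data elements are invented for the example.

```xml
<!-- Illustrative sketch: data model, external function and data binding;
     element names and the getForecast function are invented examples -->
<interface name="weather_app">
  <data_model>
    <element name="city" type="string"/>
    <element name="forecast" type="string"/>
  </data_model>
  <external_functions>
    <function name="getForecast">
      <input name="city" type="string"/>
      <output name="forecast" type="string"/>
    </function>
  </external_functions>
  <presentation name="main">
    <!-- editing this interactor updates the bound "city" element -->
    <text_edit id="city_input" data_ref="city"/>
    <activator id="go">
      <!-- an activation event triggers the back-end function -->
      <activation_event function="getForecast"/>
    </activator>
    <!-- output bound to "forecast": updates when the data element changes -->
    <description id="result" data_ref="forecast"/>
  </presentation>
</interface>
```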

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.
• Mobile CUIs model graphical interfaces for mobile devices.
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.
• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language-independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model.

The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute with information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute with information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.
• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.
• A Description can be implemented as a text, image, audio, video or table.
• A MultipleChoice can be implemented as a check_box or a list_box.
• A Navigator can be implemented as an image_link, text_link, button or image_map.
• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).
• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.
• A PositionEdit can be implemented as an image_map.
• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.
• A TextEdit can be implemented as a text_field or a text_area.
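For example, a desktop refinement could annotate the abstract interactors with their chosen concrete implementations and add the new presentation and grouping attributes. The sketch below is illustrative only; the attribute syntax is invented and does not reproduce the MARIA Desktop CUI schema.

```xml
<!-- Illustrative sketch of a Graphical Desktop CUI refinement;
     the attribute syntax is invented, not the normative MARIA schema -->
<presentation name="flight_search"
              presentation_setting="title:Flight Search; background:#ffffff; font:sans-serif">
  <grouping grouping_setting="technique:fieldset; ordering:none">
    <single_choice id="cabin_class" implementation="drop_down_list"/>
    <text_edit id="destination" implementation="text_field"/>
    <activator id="search" implementation="button"/>
    <navigator id="to_results" implementation="text_link"/>
  </grouping>
</presentation>
```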

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.
• A Description can be implemented as:
  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.
  ◦ pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; or a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; or a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;
• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);
• changing the synthesis properties (such as volume and gender);
• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time); nomatch (the input provided does not match any possible acceptable input); and help (when the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs; and re-prompt, to indicate whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation) and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.
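The automotive suspend/resume scenario just described could be expressed in a task model interchange document along these lines. The element and attribute names below are illustrative only and do not reproduce the schema of the Working Draft.

```xml
<!-- Illustrative sketch of a task model using a suspend-resume operator;
     element and attribute names do not reproduce the W3C draft schema -->
<taskModel name="in_car_navigation">
  <task name="UseNavigationUI" category="interaction">
    <!-- HazardAlert suspends DestinationEntry, which resumes afterwards -->
    <temporalOperator type="suspend-resume"/>
    <subtask name="DestinationEntry" category="interaction"/>
    <subtask name="HazardAlert" category="application"/>
  </task>
</taskModel>
```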


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted and the full proposal submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V., "USIXML: A language supporting multi-path development of user interfaces", Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D., "MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments", ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J., "A Review of XML-Compliant User Interface Description Languages", Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


• up/down spinners
• buttons with text and icons as captions
• named boxes for grouping related controls
• a variety of layout policies, e.g. absolute, horizontal, vertical, grid and table layouts

Graphical editors for creating WIMP user interfaces typically consist of a palette of controls that can be dragged on to a canvas. Once there, each control has a set of associated properties that you can update through a property sheet. These can be used to attach the desired behaviour, and it is common to define this with a scripting language that bridges the user interface controls and the application back-end.

One challenge for WIMP user interfaces is adapting to varying window sizes and resolutions. To some extent this can be addressed through layout policies that make the best use of the available space. The end user may be able to vary the font size. Scrollable windows make it possible to view a large window in a smaller screen area. However, large changes in window size and resolution call for more drastic adaptations, and one way to address this is via splitting the user interface design into multiple concrete user interface models aimed at different sizes of window.

2.4.2 Touch-based GUI (smart phones and tablets)

In the last few years there has been a rapid deployment of phones and tablets featuring a high-resolution colour screen with a multi-touch sensor. Touch-based devices typically lack traditional keyboards, and have given rise to a new set of user interface design patterns. Some common features include:

• tap, double tap, long tap, drag and drop
• two-finger pinch, stretch and zoom
• swipe to pan
• single rather than multiple windows
• background services
• pop-up notifications
• icons for launching applications
• suspend and resume semantics for applications
• orientation sensing and portrait/landscape adaptation
• ambient light level sensing
• proximity sensing
• GPS-based location sensing
• wide variety of display resolutions
• Bluetooth, USB and NFC interfaces
• variations in support for Web standards, especially scripting APIs


Further study is needed to see just how practical it is to define and standardize a common concrete user interface language for different touch-based platforms, such as Apple's iOS and Google's Android. Variations across devices create significant challenges for developers, although some of this can be hidden through the use of libraries.

2.4.3 Vocal UI

Vocal user interfaces are commonly used by automated call centres to provide services that customers can access by phone, using their voice and the phone's key pad. Vocal interfaces have to be designed to cope with errors in speech recognition and ungrammatical or out-of-domain responses by users. Simple vocal interfaces direct the user to respond in narrow and predictable ways that can be characterized by a speech grammar. Errors can be handled via repeating or rephrasing the prompt, or by giving users the choice of using the key pad. Some relevant existing W3C specifications are:

• Voice Extensible Markup Language (VoiceXML)
• Speech Recognition Grammar Specification (SRGS)
• Semantic Interpretation for Speech Recognition (SISR)
• Speech Synthesis Markup Language (SSML)
• Pronunciation Lexicon Specification (PLS)
• Emotion Markup Language (EmotionML)
• Voice Browser Call Control (CCXML)
• State Chart XML (SCXML)

VoiceXML is similar in some respects to the Hypertext Markup Language (HTML) in its use of links and forms. VoiceXML also provides support for spoken dialogues in terms of error handling, and the use of complementary languages such as SRGS for speech grammars and SSML for control of speech synthesis and prerecorded speech.

The Serenoa framework can be applied to vocal interfaces described in VoiceXML, where the speech grammars can be readily derived. This is the case for applications involving navigation through a tree of menus, where the user is directed to repeat one of the choices given in a prompt, or to tap the key pad with the number of the choice, e.g.:

M: Do you want news, sports or weather?
U: Weather.
M: The weather today will be cold and windy with a chance of rain.
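A menu dialogue of this kind maps naturally onto the VoiceXML menu element. A minimal sketch might look as follows; the target document URIs are placeholders.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal VoiceXML menu for a dialogue of the kind shown above;
     the target URIs (news.vxml etc.) are placeholders -->
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <menu dtmf="true">  <!-- dtmf="true" assigns key pad digits to choices -->
    <prompt>Do you want news, sports or weather?</prompt>
    <choice next="news.vxml">news</choice>
    <choice next="sports.vxml">sports</choice>
    <choice next="weather.vxml">weather</choice>
    <noinput>Please say news, sports or weather. <reprompt/></noinput>
    <nomatch>Sorry, I didn't get that. <reprompt/></nomatch>
  </menu>
</vxml>
```

The grammar for such a menu is implicit in the choice elements, which is why the speech grammars can be readily derived for this class of application.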


VoiceXML corresponds to the final user interface layer in the CAMELEON Reference Framework, and could be complemented by higher-level concrete user interface models for vocal interfaces. Further work is needed to clarify the requirements before standardization can take place.

More sophisticated voice interfaces encourage users to answer in an open-ended way, where a statistical language model is used to classify the user's utterance based upon an analysis of large numbers of recorded calls. The classification triggers a state transition network encoding the dialogue model. The following example is from "How may I help you?" by Gorin, Parker, Sachs and Wilpon, Proc. of IVITA, October 1996:

M: How may I help you?
U: Can you tell me how much it is to Tokyo?
M: You want to know the cost of a call.
U: Yes, that's right.
M: Please hold for rate information.

This kind of vocal interface is a poor fit for the Serenoa framework, as it requires specialized tools for annotating and analyzing large numbers of calls (the above paper cited the use of a corpus of over 10,000 calls), and for the development of utterance classification hierarchies and state transition dialogue models.

State Chart extensible Markup Language (SCXML)

• http://www.w3.org/TR/scxml/

SCXML provides a means to describe state transition models of behaviour, and can be applied to vocal and multimodal user interfaces.
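For instance, the prompt-and-confirm behaviour of a simple vocal dialogue can be captured as an SCXML state machine; the state and event names below are illustrative.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Small SCXML state machine for a prompt/confirm vocal dialogue;
     state and event names are illustrative -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
       initial="prompting">
  <state id="prompting">
    <transition event="user.input" target="confirming"/>
    <transition event="noinput" target="prompting"/>  <!-- re-prompt -->
  </state>
  <state id="confirming">
    <transition event="user.yes" target="done"/>
    <transition event="user.no" target="prompting"/>
  </state>
  <final id="done"/>
</scxml>
```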

2.4.4 Multimodal UI

Multimodal user interfaces allow users to provide input with multiple modes, e.g. typing or speaking. A single utterance can involve multiple modes, e.g. saying "tell me more about this one" while tapping at a point on the screen. Likewise, the system can respond with multiple modes of output, e.g. visual, aural and tactile: using the screen to present something, playing recorded or synthetic speech, and vibrating the device.

The wide range of possible approaches to multimodal user interfaces has hindered the development of standards. Some work that has been considered includes:


• Using spoken requests to play video or music tracks, based upon the Voice Extensible Markup Language (VoiceXML).

• Loosely coupling vocal and graphical user interfaces, where these are respectively described with VoiceXML and HTML; see http://www.w3.org/TR/mmi-arch/.

• Extending HTML with JavaScript APIs for vocal input and output; see http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/.

The W3C Multimodal Interaction Working Group has worked on:

• The Extensible Multimodal Annotation Markup Language (EMMA), which defines a markup language for containing and annotating the interpretation of user input, e.g. speech and deictic gestures.

bull Ink Markup Language (InkML) which defines a markuplanguage for capturing traces made by a stylus or finger on atouch sensitive surface This opens the way to userinterfaces where the user writes rather than types or speaksthe information to be input

Human face-to-face communication is richly multimodal, with facial gestures and body language that complement what is said. Some multimodal interfaces try to replicate this for system output by combining speech with an animated avatar (a talking head). Handwriting and speech also lend themselves to biometric techniques for user authentication, perhaps in combination with face recognition using video input.

Serenoa could address a limited class of multimodal user interfaces, but it is unclear that it is timely to take this to standardization. A possible exception is automotive applications, where multimodal interaction can be used to mitigate concerns over driver distraction, since drivers need to keep focused on the task of driving safely.

2.4.5 Industrial UI

There is plenty of potential for applying the Serenoa framework to industrial settings. Manufacturing processes frequently involve complex user interfaces for monitoring and control purposes. These can combine mechanically operated valves and sensors together with sophisticated computer-based interactive displays. Model-based user interface design techniques could be applied to reduce the cost of designing and updating industrial user interfaces. This suggests the need for work on concrete user interface modelling languages that reflect the kinds of sensors and actuators needed on the factory floor. The need for specialized models for context awareness of interactive systems in industrial settings is covered in a later section.

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations, and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context-aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people increasingly expect to access applications on different devices: a high resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concerns, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect that interactive systems can adapt to our preferences and, crucially, to our own limitations: for instance colour blindness, a need for increased contrast and for big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity, and have difficulty with operating a mouse pointer or touch screen. Bigger controls are then needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in a mobile device, and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.
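
To make this concrete, here is a minimal Python sketch of how a context-of-use model covering the three aspects above might be exposed as an API whose properties appear in rule conditions. All of the property names and the rule are hypothetical, invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical context-of-use model: the three aspects (device, user,
# environment) exposed as queryable properties.
@dataclass
class ContextOfUse:
    screen_width: int          # device capability (static)
    orientation: str           # device capability (dynamic)
    colour_blind: bool         # user capability
    ambient_noise_db: float    # environment

# An adaptation rule condition is then just a predicate over the context:
# in a noisy environment, aural alerts are ineffective, so prefer visual ones.
def needs_visual_alerts(ctx: ContextOfUse) -> bool:
    return ctx.ambient_noise_db > 70.0

ctx = ContextOfUse(screen_width=320, orientation="portrait",
                   colour_blind=False, ambient_noise_db=85.0)
print(needs_visual_alerts(ctx))  # True: noisy environment
```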

2.5.2 Industry: Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.

2.5.3 Automotive: Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges: the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car-to-car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation, and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night due to the lights of oncoming traffic, or in the day when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down, or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012, and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing, and evaluating context-aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes

• What kinds of things are being adapted, e.g. the navigational flow, or the size of text and images

• Who is triggering and controlling the adaptation process, e.g. the end user, the system, or a third party

• When the adaptation takes place, e.g. design-time or run-time

• Where adaptation takes place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity

• Which aspects of the context are involved in the adaptation

• How is the adaptation performed, i.e. what strategies and tactics are involved

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate, and compare multiple applications with regard to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.


Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation, and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the Cameleon reference framework

2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in a forward chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.
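
A minimal forward-chaining loop with first-matching-rule conflict resolution can be sketched as follows. The rules and model properties are invented for illustration; a production engine would use more sophisticated matching rather than this naive scan.

```python
# Illustrative forward-chaining sketch: each rule pairs a condition over
# the model with a conclusion that updates it. Rules fire repeatedly until
# no rule matches, taking the first applicable rule on each cycle.

def forward_chain(model: dict, rules) -> dict:
    fired = set()
    while True:
        for name, condition, conclusion in rules:
            if name not in fired and condition(model):
                conclusion(model)
                fired.add(name)
                break  # first-matching-rule conflict resolution
        else:
            return model  # no applicable rule left

rules = [
    ("wide-screen", lambda m: m["screen_width"] >= 1024,
                    lambda m: m.update(layout="two-column")),
    ("two-col-nav", lambda m: m.get("layout") == "two-column",
                    lambda m: m.update(navigation="sidebar")),
]

result = forward_chain({"screen_width": 1280}, rules)
print(result["navigation"])  # 'sidebar': the second rule was enabled by the first
```

Note how firing the first rule changes the model so that the second rule becomes applicable, which is the essence of forward chaining.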

Forward chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone, and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface, and down to other targets at the concrete user interface layer.

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement is required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high level rules avoid dealing with details that are better treated with lower level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.
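
As a sketch of this transformational style, the following XSLT fragment maps a hypothetical abstract "selection" interactor to a concrete HTML drop-down list. The abstract element names (selection, choice) are purely illustrative and not taken from any of the languages discussed here.

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Map an abstract selection interactor to an HTML drop-down list -->
  <xsl:template match="selection">
    <select name="{@id}">
      <xsl:apply-templates select="choice"/>
    </select>
  </xsl:template>
  <!-- Each abstract choice becomes a concrete option -->
  <xsl:template match="choice">
    <option value="{@value}"><xsl:value-of select="@label"/></option>
  </xsl:template>
</xsl:stylesheet>
```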

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).
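
The event-condition-action pattern for run-time adaptation can be sketched in a few lines of Python. This is an illustrative sketch, not the AAL-DL runtime: all of the event names, context properties, and adaptations are invented.

```python
# Hypothetical event-condition-action sketch: context-change events drive
# run-time adaptation of the user interface state.

class ECARule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

class AdaptationEngine:
    def __init__(self, rules):
        self.rules = rules

    def signal(self, event, context, ui):
        # On each context-change event, fire the actions of matching rules
        for rule in self.rules:
            if rule.event == event and rule.condition(context):
                rule.action(ui)

rules = [
    ECARule("ambient-light-changed",
            lambda ctx: ctx["lux"] < 10,             # dark environment
            lambda ui: ui.update(theme="high-contrast")),
    ECARule("speed-changed",
            lambda ctx: ctx["speed_kmh"] > 0,        # vehicle is moving
            lambda ui: ui.update(entertainment="suspended")),
]

engine = AdaptationEngine(rules)
ui = {"theme": "default"}
engine.signal("ambient-light-changed", {"lux": 3}, ui)
print(ui["theme"])  # 'high-contrast': the dark-environment rule has fired
```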

The examples considered so far have focused on high level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high level adaptation rules expressed in AAL-DL into the final user interface.


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1 AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first order adaptation rules for a specific context of use, and second order rules that select which first order rules to apply. Further work is under consideration for third order rules that act on second order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high level syntax aimed at direct editing.

Here is an example of a rule:

• If user is colour-blind then use alternative color palette

In XML this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items, and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose for how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles were published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices

• braille - Intended for braille tactile feedback devices

• embossed - Intended for paged braille printers

• handheld - Intended for handheld devices (typically small screen, limited bandwidth)

• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• screen - Intended primarily for color computer screens

• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.

• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.

• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities, and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619
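
For example, a style sheet can combine a media type with conditions on device features. This minimal sketch (with illustrative class names) switches from a two-column to a single-column layout on narrow screens:

```css
/* Default: two columns on wide screens */
.content { width: 70%; float: left; }
.sidebar { width: 30%; float: right; }

/* Narrow screens: stack the columns instead */
@media screen and (max-width: 480px) {
  .content, .sidebar { width: 100%; float: none; }
}
```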

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries, and client side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, with the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group.

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui

Work proceeded via teleconferences and a wiki. A second face to face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, links to talks, and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011, and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items, and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends.

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models.

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML, or UIML.

Test assertions and Test suite for demonstrating interoperability.

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools, to demonstrate the potential and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers).

But where appropriate, it should be feasible to define markup, events, and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications follow the following stages. These have been annotated with the dates the MBUI deliverables were envisioned by the charter to reach each stage.

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework; see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram.


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization, and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR, and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator            Symbol
Enabling            T1 >> T2  or  T1 []>> T2
Disabling           T1 [> T2
Interruption        T1 |> T2
Choice              T1 [] T2
Iteration           T1*  or  T1(n)
Concurrency         T1 ||| T2  or  T1 |[]| T2
Optionality         [T]
Order Independency  T1 |=| T2

The second symbol for enabling denotes task enabling with information passing. Likewise, the second symbol for concurrency denotes concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.
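To give a feel for what such an interchange file might look like, here is a small hypothetical task model. The element and attribute names below are invented for illustration and are not those of the normative CTT schema, which should be consulted at the URL above.

```xml
<!-- Illustrative only: not the normative CTT interchange format -->
<TaskModel name="WithdrawCash">
  <Task id="AccessATM" category="abstraction">
    <!-- child tasks linked by temporal operators, allocated to the user -->
    <Task id="InsertCard" category="user" operatorToNext="enabling"/>
    <Task id="EnterPIN" category="user" operatorToNext="enabling"/>
    <Task id="SelectAmount" category="user"/>
  </Task>
</TaskModel>
```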

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated with attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item
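As an illustration of how a hierarchy of use objects might be serialized, consider the hypothetical fragment below. The element and attribute names are invented for this sketch and are not taken from the UseML schema.

```xml
<!-- Illustrative only: names are hypothetical, not from the UseML schema -->
<useObject name="ConfigurePump" userGroup="maintenance">
  <!-- elementary use objects: the atomic user activities listed above -->
  <elementaryUseObject type="select"  name="ChoosePumpMode"/>
  <elementaryUseObject type="input"   name="SetFlowRate"/>
  <elementaryUseObject type="trigger" name="StartPump"/>
</useObject>
```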

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:

35

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">

  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>

  <logic></logic>

</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components from the description are present, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
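The fragment below sketches how a part declared in the structure section can be bound to concrete properties via the style section. It follows the general shape of UIML, but the part and property names are chosen for illustration rather than drawn verbatim from the specification.

```xml
<!-- Illustrative sketch in the general shape of UIML; not verbatim from the spec -->
<uiml version="2.0">
  <interface name="hello" class="MyApps">
    <structure>
      <!-- an abstract part, identified by name and class -->
      <part name="greeting" class="Label"/>
    </structure>
    <style>
      <!-- the style section binds the part to implementation-level properties -->
      <property part-name="greeting" name="renderer">java.awt.Label</property>
      <property part-name="greeting" name="text">Hello, world!</property>
    </style>
  </interface>
</uiml>
```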

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping
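To give a flavour of how SCXML expresses event-based state transitions, the fragment below models a hypothetical two-state interactor; the state and event names are invented for illustration, but the SCXML elements themselves follow the W3C notation.

```xml
<!-- A minimal SCXML state machine for a hypothetical interactor -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <!-- a user "focus" event moves the interactor into its active state -->
    <transition event="focus" target="active"/>
  </state>
  <state id="active">
    <!-- a "blur" event returns it to idle -->
    <transition event="blur" target="idle"/>
  </state>
</scxml>
```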

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes, by sending events to state charts or by calling functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators, including sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces, and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework, where task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and finally compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower level
• Abstraction: from low to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A UsiXML extension has been proposed that enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only Output: represents information that is presented to the user and is not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed at presenting contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source
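To make the presentation/interactor structure concrete, here is a hypothetical AUI fragment in the spirit of MARIA. The element names are chosen for illustration and are not taken from the normative MARIA schema.

```xml
<!-- Illustrative only: not the normative MARIA XML -->
<presentation name="search_flights">
  <grouping name="query">
    <!-- interactors: a text edit, a single choice, and an activator -->
    <text_edit id="destination"/>
    <single_choice id="cabin_class" cardinality="3"/>
    <activator id="search"/>
  </grouping>
  <!-- an only-output interactor presenting information to the user -->
  <description id="results_hint"/>
</presentation>
```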

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: The interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime, modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back end (e.g. web services, code libraries, databases, etc.). Each declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and Activation Events, which can be raised by activators and are intended to specify the execution of some application functionality (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings), and the possibility to specify conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers

• Mobile CUIs model graphical interfaces for mobile devices

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented on the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities, through an inheritance mechanism, to specify the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension of the AUI meta-model that defines the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, holding information on the title, background (colour or image) and font used; and Grouping, which contains the grouping_setting attribute, holding information on the grouping display technique (grid, fieldset, bullet, background colour or image) and on whether the elements are related by an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinement for obtaining the Vocal CUI definition involves defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis, or whether the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user, and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for platform recognition of the user input.


• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).
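A hypothetical vocal presentation combining several of these refinements might read as follows; the element and attribute names are invented for illustration and are not those of the normative MARIA vocal CUI.

```xml
<!-- Illustrative only: names are invented, not the normative MARIA vocal CUI -->
<vocal_presentation name="book_taxi">
  <!-- synthesized speech with voice properties and barge-in control -->
  <speech id="prompt" rate="medium" gender="female" barge_in="true">
    Where would you like to be picked up?
  </speech>
  <!-- vocal textual input recognised against an external grammar -->
  <vocal_textual_input id="pickup" grammar="grammars/addresses.grxml"/>
  <!-- a record element for input that no grammar can recognise -->
  <record id="note" beep="true" maxtime="10s" finalsilence="2s"/>
</vocal_presentation>
```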

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).

• Changing the synthesis properties (such as volume and gender).

• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input), and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, indicating whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used in model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation, and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered optional and is not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.
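As an illustration of the kind of model such an interchange schema enables, the suspend/resume scenario above might be serialized along the following lines; the element and attribute names are invented for this sketch and are not those of the Working Draft.

```xml
<!-- Illustrative only: not the element names of the W3C Working Draft -->
<taskModel name="Navigation">
  <task id="GuideDriver" category="system">
    <!-- a hazard alert suspends route guidance, which resumes afterwards -->
    <temporalOperator type="suspend_resume" with="HazardAlert"/>
  </task>
  <task id="HazardAlert" category="system"/>
</taskModel>
```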


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft:

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr. Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744 (Software Engineering — Metamodel for Development Methodologies) is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N. J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


                                      • Multimodal Mappings
                                      • Synchronization Mappings
                                      • Exemplary Mappings
                                        • UsiXML
                                          • Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled
                                            • MARIA
                                              • Abstract User Interface
                                              • Concrete User Interface
                                                • Concrete Desktop User Interface
                                                • Concrete Vocal User Interface
                                                  • MBUI WG Note - Introduction to Model-Based UI Design
                                                  • MBUI WG Note - Glossary of Terms
                                                  • MBUI WG Specification - Task Models for Model-Based UI Design
                                                  • MBUI WG Specification - Abstract User Interface Models
                                                  • MBUI WG Future Plans
                                                    • CoDeMoDIS proposal for a COST Action
                                                    • ISO 24744 standardisation action
                                                    • Conclusions
                                                    • References
Page 9: Standarization Actions Report - Europa · 2017-04-20 · Standarization Actions Report Project no. FP7 - 258030 Deliverable D6.2.1 Executive Summary This document provides a description

Further study is needed to see just how practical it is to define and standardize a common concrete user interface language for different touch-based platforms such as Apple's iOS and Google's Android. Variations across devices create significant challenges for developers, although some of this can be hidden through the use of libraries.

2.4.3 Vocal UI

Vocal user interfaces are commonly used by automated call centres to provide services that customers can access by phone, using their voice and the phone's key pad. Vocal interfaces have to be designed to cope with errors in speech recognition and with ungrammatical or out-of-domain responses by users. Simple vocal interfaces direct the user to respond in narrow and predictable ways that can be characterized by a speech grammar. Errors can be handled by repeating or rephrasing the prompt, or by giving users the choice of using the key pad. Some relevant existing W3C specifications are:

• Voice Extensible Markup Language (VoiceXML)
• Speech Recognition Grammar Specification (SRGS)
• Semantic Interpretation for Speech Recognition (SISR)
• Speech Synthesis Markup Language (SSML)
• Pronunciation Lexicon Specification (PLS)
• Emotion Markup Language (EmotionML)
• Voice Browser Call Control (CCXML)
• State Chart XML (SCXML)

VoiceXML is similar in some respects to the Hypertext Markup Language (HTML) in its use of links and forms. VoiceXML also provides support for spoken dialogues in terms of error handling, and the use of complementary languages such as SRGS for speech grammars and SSML for control of speech synthesis and prerecorded speech.

The Serenoa framework can be applied to vocal interfaces described in VoiceXML where the speech grammars can be readily derived. This is the case for applications involving navigation through a tree of menus, where the user is directed to repeat one of the choices given in a prompt, or to tap the key pad with the number of the choice, e.g.:

M: Do you want news, sports or weather?
U: Weather.
M: The weather today will be cold and windy with a chance of rain.
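The menu-tree navigation pattern described above can be sketched as a simple dialogue loop. The menu structure, prompts and responses below are invented for illustration; they are not part of VoiceXML or the Serenoa framework, which would express the grammar declaratively rather than in code.

```python
# Minimal sketch of a menu-tree vocal dialogue: each menu node has a
# prompt and a grammar mapping in-grammar words (or key-pad digits)
# to a response. Out-of-grammar input triggers a re-prompt.
MENU = {
    "root": {
        "prompt": "Do you want news, sports or weather?",
        "choices": {"news": "It is a quiet news day.",
                    "sports": "The match ended in a draw.",
                    "weather": "The weather today will be cold and windy."},
    }
}

def respond(node, utterance, menu=MENU):
    """Match an utterance against the node's grammar, with key-pad fallback."""
    choices = menu[node]["choices"]
    utterance = utterance.strip().lower()
    if utterance in choices:                  # in-grammar spoken choice
        return choices[utterance]
    if utterance.isdigit():                   # key-pad digit fallback
        keys = list(choices)
        idx = int(utterance) - 1
        if 0 <= idx < len(keys):
            return choices[keys[idx]]
    # error handling: repeat the prompt
    return "Sorry, please repeat: " + menu[node]["prompt"]
```

A real deployment would instead generate the equivalent VoiceXML menu and SRGS grammar from the concrete user interface model.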


VoiceXML corresponds to the final user interface layer in the Cameleon Reference Framework, and could be complemented by higher-level concrete user interface models for vocal interfaces. Further work is needed to clarify the requirements before standardization can take place.

More sophisticated voice interfaces encourage users to answer in an open-ended way, where a statistical language model is used to classify the user's utterance based upon an analysis of large numbers of recorded calls. The classification triggers a state transition network encoding the dialogue model. The following example is from "How may I help you?" by Gorin, Parker, Sachs and Wilpon, Proc. of IVTTA, October 1996:

M: How may I help you?
U: Can you tell me how much it is to Tokyo?
M: You want to know the cost of a call?
U: Yes, that's right.
M: Please hold for rate information.

This kind of vocal interface is a poor fit for the Serenoa framework, as it requires specialized tools for annotating and analyzing large numbers of calls (the above paper cited the use of a corpus of over 10,000 calls), and for the development of utterance classification hierarchies and state transition dialogue models.

State Chart extensible Markup Language (SCXML)

• http://www.w3.org/TR/scxml/

SCXML provides a means to describe state transition models of behaviour, and can be applied to vocal and multimodal user interfaces.

2.4.4 Multimodal UI

Multimodal user interfaces allow users to provide input with multiple modes, e.g. typing or speaking. A single utterance can involve multiple modes, e.g. saying "tell me more about this one" while tapping at a point on the screen. Likewise, the system can respond with multiple modes of output, e.g. visual, aural and tactile: using the screen to present something, playing recorded or synthetic speech, and vibrating the device.

The wide range of possible approaches to multimodal user interfaces has hindered the development of standards. Some work that has been considered includes:


• Using spoken requests to play video or music tracks, based upon the Voice Extensible Markup Language (VoiceXML).

• Loosely coupling vocal and graphical user interfaces, where these are respectively described with VoiceXML and HTML; see http://www.w3.org/TR/mmi-arch/

• Extending HTML with JavaScript APIs for vocal input and output; see http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/

The W3C Multimodal Interaction Working Group has worked on:

• The Extensible Multimodal Annotation Markup Language (EMMA), which defines a markup language for containing and annotating the interpretation of user input, e.g. speech and deictic gestures.

• Ink Markup Language (InkML), which defines a markup language for capturing traces made by a stylus or finger on a touch-sensitive surface. This opens the way to user interfaces where the user writes, rather than types or speaks, the information to be input.

Human face-to-face communication is richly multimodal, with facial gestures and body language that complement what is said. Some multimodal interfaces try to replicate this for system output by combining speech with an animated avatar (a talking head). Handwriting and speech also lend themselves to biometric techniques for user authentication, perhaps in combination with face recognition using video input.

Serenoa could address a limited class of multimodal user interfaces, but it is unclear that it is timely to take this to standardization. A possible exception is automotive applications, where multimodal interaction can be used to mitigate concerns over driver distraction, as drivers need to keep focused on the task of driving safely.

2.4.5 Industrial UI

There is plenty of potential for applying the Serenoa framework to industrial settings. Manufacturing processes frequently involve complex user interfaces for monitoring and control purposes. These can combine mechanically operated valves and sensors together with sophisticated computer-based interactive displays. Model-based user interface design techniques could be applied to reduce the cost of designing and updating industrial user interfaces. This suggests the need for work on concrete user interface modelling languages that reflect the kinds of sensors and actuators needed on the factory floor. The need for specialized models for context awareness of interactive systems in industrial settings is covered in a later section.

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context-aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people increasingly expect to access applications on different devices: a high-resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concerns, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect that interactive systems can adapt to our preferences and, crucially, to our own limitations: for instance, colour blindness, a need for increased contrast and for big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity and have difficulty operating a mouse pointer or touch screen. Bigger controls are needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in a mobile device, and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.
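As an illustration of how such a conceptual model might be exposed through an API, the following sketch groups properties under the three aspects named above (device, user, environment) and lets rule conditions query them by path. All property names here are invented for the example; they are not part of any Serenoa or W3C specification.

```python
from dataclasses import dataclass, field

@dataclass
class ContextOfUse:
    """Toy context-of-use model covering the three main aspects."""
    device: dict = field(default_factory=dict)       # e.g. screen size, orientation
    user: dict = field(default_factory=dict)         # e.g. preferences, impairments
    environment: dict = field(default_factory=dict)  # e.g. noise level, lighting

    def get(self, path, default=None):
        """Query a property by a dotted path such as 'device.orientation',
        as an adaptation-rule condition might do."""
        aspect, _, key = path.partition(".")
        return getattr(self, aspect).get(key, default)

# Example instance with static and dynamic properties.
ctx = ContextOfUse(device={"orientation": "portrait"},
                   user={"colour_blind": True},
                   environment={"noise": "high"})
```

A standardized model would additionally define which properties exist, their value spaces, and how changes are signalled.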

2.5.2 Industry Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.

2.5.3 Automotive Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges: the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car-to-car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety-related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night due to the lights of oncoming traffic, or in the day when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down, or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012, and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive-oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing and evaluating context-aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes:

• What kinds of things are being adapted, e.g. the navigational flow or the size of text and images?
• Who is triggering and controlling the adaptation process, e.g. the end user, the system, or a third party?
• When does the adaptation take place, e.g. at design-time or run-time?
• Where does adaptation take place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity?
• Which aspects of the context are involved in the adaptation?
• How is the adaptation performed, i.e. what strategies and tactics are involved?

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate, and compare multiple applications with regard to their level of adaptation coverage, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form ordered dimensions; however, their levels do not always have equal proportions. These are illustrated in the following figure.


Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and searching for information relevant to defining and implementing the adaptation process. This is useful throughout all phases of an interactive system: design, specification, implementation and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the Cameleon reference framework.

2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces.

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in a forward-chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.
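The forward-chaining cycle just described can be sketched in a few lines: rules whose conditions match the current model fire their conclusions, updating the model until no further rule changes anything. This is a minimal illustration only (using the "execute the first matching rule" strategy); the model facts and rules are invented for the example and are far simpler than a RETE-based engine.

```python
def forward_chain(model, rules, max_cycles=100):
    """model: dict of facts; rules: list of (condition, action) pairs.
    Fires the first matching rule whose action actually changes the
    model; stops when no rule changes anything (a fixed point)."""
    for _ in range(max_cycles):
        changed = False
        for condition, action in rules:
            if condition(model):
                new_model = {**model, **action(model)}
                if new_model != model:
                    model, changed = new_model, True
                    break  # restart matching from the first rule
        if not changed:
            return model
    raise RuntimeError("no fixed point reached")

# Illustrative rules: propagate a small-screen constraint to layout choices.
rules = [
    (lambda m: m.get("screen") == "small",
     lambda m: {"layout": "single-column"}),
    (lambda m: m.get("layout") == "single-column",
     lambda m: {"font_size": "large"}),
]
result = forward_chain({"screen": "small"}, rules)
```

Note how the second rule fires only because the first one updated the model, which is the chaining behaviour described above.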

Forward-chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later, when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface, and down to other targets at the concrete user interface layer.
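The event-condition-action pattern for propagating edits between model layers can be sketched as follows; the layer names, event format and rule are assumptions made for this example, not the Serenoa rule language.

```python
# Sketch of event-condition-action propagation: an edit event on one
# model layer triggers rules whose actions update other layers.
class ModelSync:
    def __init__(self):
        # One abstract UI model and two concrete UI targets (illustrative).
        self.models = {"abstract": {}, "desktop": {}, "smartphone": {}}
        self.rules = []  # list of (event_type, condition, action)

    def on(self, event_type, condition, action):
        self.rules.append((event_type, condition, action))

    def edit(self, layer, key, value):
        """Apply a manual edit, then fire matching ECA rules."""
        self.models[layer][key] = value
        event = {"type": "edit", "layer": layer, "key": key, "value": value}
        for etype, cond, act in self.rules:
            if etype == event["type"] and cond(event):
                act(self.models, event)

sync = ModelSync()
# Rule: when the abstract UI changes, copy the change to every concrete target.
sync.on("edit",
        lambda e: e["layer"] == "abstract",
        lambda models, e: [models[t].__setitem__(e["key"], e["value"])
                           for t in ("desktop", "smartphone")])
sync.edit("abstract", "title", "Settings")
```

A symmetric rule (concrete edit propagated up to the abstract layer and across to sibling targets) would follow the same shape.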

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement is required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated with lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
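The interaction between the three modules can be sketched as follows. The signalling protocol, rule format and the colour-palette adaptation are invented for the example; the real Adaptation Engine interprets AAL-DL rather than Python tuples.

```python
class ContextManager:
    """Tracks the context of use; supports queries and change signalling."""
    def __init__(self):
        self.context, self.listeners = {}, []
    def update(self, key, value):
        self.context[key] = value
        for listener in self.listeners:   # signal the change
            listener(key, value)

class AdaptationEngine:
    """Evaluates (condition, adaptation) rules on each context change."""
    def __init__(self, runtime):
        self.rules, self.runtime = [], runtime
    def on_context_change(self, key, value):
        for condition, adaptation in self.rules:
            if condition(key, value):
                self.runtime.apply(adaptation)

class RuntimeEngine:
    """Maps the concrete UI to the final UI, applying suggested adaptations."""
    def __init__(self):
        self.final_ui = {"palette": "default"}
    def apply(self, adaptation):
        self.final_ui.update(adaptation)

# Wire the modules together and exercise a context change.
runtime = RuntimeEngine()
engine = AdaptationEngine(runtime)
engine.rules.append((lambda k, v: k == "colour_blind" and v,
                     {"palette": "high-contrast"}))
manager = ContextManager()
manager.listeners.append(engine.on_context_change)
manager.update("colour_blind", True)
```

In a cloud deployment the same three roles would communicate over the network rather than through direct method calls.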

One challenge is preserving the state of the interaction when applying an adaptation in response to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1 AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If the user is colour-blind, then use an alternative colour palette.

In XML, this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel ("skinning" the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and to assess its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items, and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on both desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device-independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, at a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices.
• braille - Intended for braille tactile feedback devices.
• embossed - Intended for paged braille printers.
• handheld - Intended for handheld devices (typically small screen, limited bandwidth).
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens.
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available).

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities, and it finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries, and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, leading to the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group.

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User InterfacesIncubator Group was held on 24 October 2008 hosted by W3C atthe 2008 Technical Plenary in Mandelieu France The Charter andhome page for the Model-Based Interfaces Incubator Group can befound at

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome, described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaption to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML, or UIML.

Test assertions and test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events, and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications follow the following stages. These have been annotated with the dates the MBUI deliverables were envisioned by the charter to reach each stage:

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104) http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram.


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization, and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR, first published at INTERACT'97, and has since been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1^n
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

Here the second symbol for enabling denotes task enabling with information passing. Likewise, the second symbol for concurrency denotes concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.
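To give a flavour of what such an XML interchange format looks like, a fragment for a simple task hierarchy might be serialized along the following lines. The element and attribute names here are purely illustrative inventions, not those of the actual CTT schema:

```xml
<!-- Illustrative sketch only: element and attribute names are invented,
     not taken from the CTT interchange schema -->
<taskModel name="FindFlight">
  <task name="SpecifyQuery" category="interaction">
    <task name="EnterDates" category="interaction"/>
    <!-- enabling (">>"): the next subtask starts when this one completes -->
    <temporalOperator type="enabling"/>
    <task name="EnterDestination" category="interaction"/>
  </task>
  <temporalOperator type="enabling"/>
  <task name="ShowResults" category="application"/>
</taskModel>
```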

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities, and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist:

• Trigger: starting, calling, or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model.


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style, and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams, and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping
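As a sketch of the SCXML-based behaviour description mentioned above, a simple button-like interactor might be modelled as follows. The scxml, state, transition, and send elements come from the SCXML specification, but the state and event names are invented for this illustration:

```xml
<!-- Illustrative sketch: a button-like interactor's behaviour in SCXML.
     State and event names are invented; the elements are standard SCXML. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <transition event="focus" target="focused"/>
  </state>
  <state id="focused">
    <transition event="select" target="activated"/>
    <transition event="blur" target="idle"/>
  </state>
  <state id="activated">
    <!-- notify the application, then return to the focused state -->
    <onentry><send event="activated"/></onentry>
    <transition target="focused"/>
  </state>
</scxml>
```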

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace, and MMI-Arch. For more details see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend
• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment, and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces, and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), which are compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from a higher to a lower level
• Abstraction: from a lower to a higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring, and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality, or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit), or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is presented to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback, or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements
• Relation: a group where two or more elements are related to each other
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements
• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features, available already at the abstract level and common to all languages, are:

• Data Model: The interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
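Since the data model uses standard XML Schema Definition constructs, a data model for, say, a flight-query interface could be sketched as follows. The element and type names are invented for this illustration; only the xs:* constructs come from the XML Schema standard:

```xml
<!-- Illustrative only: a MARIA-style data model expressed with standard
     XML Schema constructs; element names are invented for this sketch -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="flightQuery">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="destination" type="xs:string"/>
        <xs:element name="departureDate" type="xs:date"/>
        <xs:element name="passengers" type="xs:positiveInteger"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

An interactor bound to, for example, the destination element would then read and write that value at runtime through the binding mechanism described above.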

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers
• Mobile CUIs model graphical interfaces for mobile devices
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language-independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute with information on the title, background (colour or image), and the font used; and Grouping, which contains the grouping_setting attribute with information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value), or mailto.
• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.
• A Description can be implemented as a text, image, audio, video, or table.
• A MultipleChoice can be implemented as a check_box or a list_box.
• A Navigator can be implemented as an image_link, text_link, button, or image_map.
• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).
• A NumericalEditInRange can be implemented as a text_field, a spin_box, or a track_bar.
• A PositionEdit can be implemented as an image_map.
• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list, or image_map.
• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level), and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.
• A Description can be implemented as:
  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate, and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.
  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.
• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.
• A SingleChoice can be implemented as a vocal selection that accepts only one choice.
• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.
• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.
• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group
• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)
• Changing the synthesis properties (such as volume and gender)
• Inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time); nomatch (the input provided does not match any possible acceptable input); and help, when the user asks for support (in any platform-specific way) in order to continue the session. All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to that of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend-resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
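To make the interchange idea concrete, here is a hedged sketch (in Python, using JSON as one possible non-XML interchange format; the element and attribute names are illustrative and are not those of the W3C specification) of a task model in which a hazard alert suspends, and later yields back to, the navigation task:

```python
import json

# Hypothetical sketch of a CTT-style task model with a suspend-resume
# operator, serialized to JSON as one possible interchange format.

task_model = {
    "task": "DriveAndNavigate",
    "operator": "suspend-resume",   # a hazard alert suspends the navigation UI
    "subtasks": [
        {"task": "UseNavigationUI",
         "operator": "enabling",
         "subtasks": [{"task": "EnterDestination"}, {"task": "FollowRoute"}]},
        {"task": "HazardAlert"},    # safety-critical: takes over, then yields back
    ],
}

def tasks(model):
    """Flatten a task model into the set of task names it contains."""
    names = {model["task"]}
    for sub in model.get("subtasks", []):
        names |= tasks(sub)
    return names

serialized = json.dumps(task_model)   # interchange form
restored = json.loads(serialized)     # lossless round-trip
```

The round-trip through `json.dumps`/`json.loads` illustrates why a plain-data metamodel serializes equally well to XML or JSON.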


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design-time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744 "Software Engineering — Metamodel for Development Methodologies" is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS 2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



VoiceXML corresponds to the final user interface layer in the Cameleon Reference Framework, and could be complemented by higher level concrete user interface models for vocal interfaces. Further work is needed to clarify the requirements before standardization can take place.

More sophisticated voice interfaces encourage users to answer in an open-ended way, where a statistical language model is used to classify the user's utterance based upon an analysis of large numbers of recorded calls. The classification triggers a state transition network encoding the dialogue model. The following example is from "How may I help you?" by Gorin, Parker, Sachs and Wilpon, Proc. of IVITA, October 1996:

M: How may I help you?
U: Can you tell me how much it is to Tokyo?
M: You want to know the cost of a call.
U: Yes, that's right.
M: Please hold for rate information.

This kind of vocal interface is a poor fit for the Serenoa framework, as it requires specialized tools for annotating and analyzing large numbers of calls (the above paper cited the use of a corpus of over 10,000 calls), and for the development of utterance classification hierarchies and state transition dialogue models.

State Chart extensible Markup Language (SCXML)

• http://www.w3.org/TR/scxml/

SCXML provides a means to describe state transition models of behaviour, and can be applied to vocal and multimodal user interfaces.
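As an illustration of the kind of state transition model that SCXML expresses in markup, the following Python sketch runs recognized user inputs through a transition table. The states and events are hypothetical, loosely following the call-rate dialogue above; this is not SCXML syntax or semantics in full.

```python
# Sketch of a state transition model of the kind SCXML describes in
# markup, here as a small Python transition table.

TRANSITIONS = {
    ("AskIntent", "rate_query"): "ConfirmRateQuery",
    ("ConfirmRateQuery", "yes"): "PlayRateInfo",
    ("ConfirmRateQuery", "no"):  "AskIntent",
}

def run(events, state="AskIntent"):
    """Feed a sequence of recognized user inputs through the machine."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)  # ignore unknown events
    return state
```

A confirmed rate query ends in the state that plays the rate information, while a rejected confirmation returns the dialogue to its initial question.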

2.4.4 Multimodal UI

Multimodal user interfaces allow users to provide input with multiple modes, e.g. typing or speaking. A single utterance can involve multiple modes, e.g. saying "tell me more about this one" while tapping at a point on the screen. Likewise, the system can respond with multiple modes of output, e.g. visual, aural and tactile: using the screen to present something, playing recorded or synthetic speech, and vibrating the device.

The wide range of possible approaches to multimodal user interfaces has hindered the development of standards. Some work that has been considered includes:


• Using spoken requests to play video or music tracks, based upon the Voice Extensible Markup Language (VoiceXML)

• Loosely coupling vocal and graphical user interfaces, where these are respectively described with VoiceXML and HTML; see http://www.w3.org/TR/mmi-arch/

• Extending HTML with JavaScript APIs for vocal input and output; see http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/

The W3C Multimodal Interaction Working Group has worked on:

• The Extensible Multimodal Annotation Markup Language (EMMA), which defines a markup language for containing and annotating the interpretation of user input, e.g. speech and deictic gestures

• Ink Markup Language (InkML), which defines a markup language for capturing traces made by a stylus or finger on a touch-sensitive surface. This opens the way to user interfaces where the user writes, rather than types or speaks, the information to be input.

Human face-to-face communication is richly multimodal, with facial gestures and body language that complement what is said. Some multimodal interfaces try to replicate this for system output by combining speech with an animated avatar (a talking head). Handwriting and speech also lend themselves to biometric techniques for user authentication, perhaps in combination with face recognition using video input.

Serenoa could address a limited class of multimodal user interfaces, but it is unclear that it is timely to take this to standardization. A possible exception is automotive applications, where multimodal interaction can be used to mitigate concerns over driver distraction, as drivers need to keep focused on the task of driving safely.

2.4.5 Industrial UI

There is plenty of potential for applying the Serenoa framework to industrial settings. Manufacturing processes frequently involve complex user interfaces for monitoring and control purposes. These can combine mechanically operated valves and sensors together with sophisticated computer-based interactive displays. Model-based user interface design techniques could be applied to reduce the cost of designing and updating industrial user interfaces. This suggests the need for work on concrete user interface modelling languages that reflect the kinds of sensors and actuators needed on the factory floor. The need for specialized models for context awareness of interactive systems in industrial settings is covered in a later section.

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations, and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context-aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people increasingly expect to access applications on different devices: a high resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concerns, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect that interactive systems can adapt to our preferences and, crucially, to our own limitations: for instance, colour blindness, a need for increased contrast and for big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity and have difficulty with operating a mouse pointer or touch screen. Bigger controls are needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in a mobile device, and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.
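As a hedged sketch of what such a formalization might look like in practice, the following Python fragment models the three aspects of the context of use and exposes them through a simple query API that adaptation rule conditions can reference. All of the property names are hypothetical, not part of any standardized model.

```python
# Illustrative sketch of a context-of-use model exposed through a
# query API, so that adaptation rule conditions can refer to its
# properties. Property names are hypothetical.

context = {
    "device": {"screen_width": 320, "orientation": "portrait"},  # partly dynamic
    "user": {"colour_blind": True, "font_scale": 1.5},
    "environment": {"noise_level": "high", "battery_low": False},
}

def get(path):
    """Query the context of use by dotted path, e.g. 'user.colour_blind'."""
    part, leaf = path.split(".")
    return context[part][leaf]

# A rule condition expressed over context properties:
def needs_visual_alerts():
    return get("environment.noise_level") == "high"
```

The point of the sketch is the separation: rules refer to named context properties, while the Context Manager (described later) is responsible for keeping those properties up to date.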

2.5.2 Industry Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.

2.5.3 Automotive Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges: the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car to car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night due to the lights of oncoming traffic, or in the day when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012, and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive-oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing, and evaluating context-aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes:

• What kinds of things are being adapted, e.g. the navigational flow or the size of text and images?
• Who is triggering and controlling the adaptation process, e.g. the end user, the system, or a third party?
• When does the adaptation take place, e.g. design-time or run-time?
• Where does adaptation take place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity?
• Which aspects of the context are involved in the adaptation?
• How is the adaptation performed, i.e. what strategies and tactics are involved?

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate, and compare multiple applications in regards to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.


Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation, and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the Cameleon reference framework
2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in a forward-chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.
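A minimal sketch of this execution model, assuming a "first matching rule wins" conflict resolution strategy; the two example rules are hypothetical, not Serenoa rules:

```python
# Minimal forward-chaining sketch: rules match the current state of a
# model and fire conclusions that update it, repeating until no rule
# applies. Conflict resolution here is "first matching rule wins",
# and each rule fires at most once to guarantee termination.

def forward_chain(model, rules):
    fired = set()
    while True:
        for name, condition, action in rules:
            if name not in fired and condition(model):
                action(model)          # fire the conclusion, updating the model
                fired.add(name)
                break                  # restart matching from the first rule
        else:
            return model               # no applicable rules remain

rules = [
    ("low-vision-big-fonts",
     lambda m: m.get("limited_vision"),
     lambda m: m.update(font_scale=2.0)),
    ("big-fonts-fewer-columns",
     lambda m: m.get("font_scale", 1.0) > 1.5,
     lambda m: m.update(columns=1)),
]
```

Note how the second rule only becomes applicable after the first one has fired: this chaining of conclusions into new matches is the essence of forward execution.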

Forward-chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone, and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface, and down to other targets at the concrete user interface layer.
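A small sketch of this propagation pattern, with hypothetical model contents and target names:

```python
# Sketch of the event-condition-action pattern used to propagate a
# manual change to the abstract UI down to each concrete UI target.
# Model contents and target names are hypothetical.

abstract_ui = {"fields": ["name", "email"]}
concrete_uis = {
    "desktop": {"fields": ["name", "email"]},
    "smartphone": {"fields": ["name", "email"]},
    "tablet": {"fields": ["name", "email"]},
}

def on_aui_change(event):
    """Event: a change to the abstract UI. Condition: the change is a
    field addition. Action: propagate it to every concrete UI target."""
    if event["kind"] == "add_field":
        abstract_ui["fields"].append(event["field"])
        for target in concrete_uis.values():
            target["fields"].append(event["field"])

on_aui_change({"kind": "add_field", "field": "phone"})
```

A real system would propagate in both directions; the sketch shows only the downward case for brevity.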

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement is required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated with lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
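The interplay of the three modules can be sketched as follows. This is a hedged illustration of the architecture's data flow, not the Serenoa implementation; the rule, property, and theme names are invented.

```python
# Sketch of the three run-time modules wired together: the Context
# Manager signals context changes, the Adaptation Engine evaluates an
# event-condition-action rule, and the Run-time Engine applies the
# resulting adaptation to the final user interface.

class ContextManager:
    def __init__(self):
        self.context = {"ambient_light": "day"}
        self.listeners = []
    def update(self, key, value):
        self.context[key] = value
        for listener in self.listeners:
            listener(self.context)            # signal the change

class AdaptationEngine:
    def __init__(self, runtime):
        self.runtime = runtime
    def __call__(self, context):              # ECA rule: night -> dark theme
        if context["ambient_light"] == "night":
            self.runtime.apply("theme", "dark")

class RuntimeEngine:
    def __init__(self):
        self.final_ui = {"theme": "light"}
    def apply(self, prop, value):
        self.final_ui[prop] = value

runtime = RuntimeEngine()
cm = ContextManager()
cm.listeners.append(AdaptationEngine(runtime))
cm.update("ambient_light", "night")           # context change drives adaptation
```

The listener interface between the Context Manager and the Adaptation Engine is the key design choice: the rules never poll the context, they react to signalled changes.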

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance, and reliability.
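The distinction between first-order and second-order rules can be illustrated with a small sketch. The rule names and selection policies below are hypothetical examples, not AAL-DL syntax:

```python
# Illustrative sketch: first-order rules adapt the UI for a given
# context, while a second-order rule selects which first-order rules
# to apply. All rule names and policies are hypothetical.

first_order = {
    "high-contrast": lambda ui: {**ui, "contrast": "high"},
    "big-fonts":     lambda ui: {**ui, "font_scale": 2.0},
    "vocal-output":  lambda ui: {**ui, "modality": "vocal"},
}

def second_order(context):
    """Select which first-order rules apply in this context of use."""
    selected = []
    if context.get("limited_vision"):
        selected += ["high-contrast", "big-fonts"]
    if context.get("eyes_busy"):
        selected += ["vocal-output"]
    return selected

def adapt(ui, context):
    for name in second_order(context):
        ui = first_order[name](ui)
    return ui
```

A hypothetical third-order rule would, in the same spirit, rewrite or reorder the body of `second_order` itself, e.g. to trade adaptation aggressiveness against performance.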

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformations.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block, and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If the user is colour-blind, then use an alternative colour palette

In XML this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed tosupport this in the rule language and its suitability forstandardization There is some potential for standardizing themeans for skinning the concrete user interface for particularclasses of target platforms

3 W3C Model-Based UI Working GroupThis section of the report describes standardization activities at theW3C on model-based user interface design

31 MBUI WG - Introduction

The W3C Model Based User Interfaces Working Group was formedon 17 October 2011 and provides the main target for standardizingwork from the Serenoa project This section will describe thehistory leading up to the formation of this Working Group itscharter the technical submissions received the current work itemsand future plans

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers
• handheld - Intended for handheld devices (typically small screen, limited bandwidth)
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently the specification has added further capabilities, and it finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui

Work proceeded via teleconferences and a wiki. A second face to face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome, described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaption to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers).

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications go through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework; see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet the requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams.

The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1n
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

Here the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.
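As a sketch of what such an interchange format can look like (the element and attribute names here are illustrative assumptions, not the actual CTT schema), a small task model with two subtasks linked by the enabling operator might read:

```xml
<!-- Hypothetical CTT-style task model sketch; names are assumptions.
     The enabling operator (>>) links flight selection to payment -->
<TaskModel name="MakeReservation">
  <Task id="SelectFlight" category="interaction" operator="enabling"/>
  <Task id="MakePayment" category="interaction"/>
</TaskModel>
```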

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic (atomic) activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item
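A use model built from these objects might be serialized along the following lines (a purely illustrative sketch: the element and attribute names are assumptions, not the published UseML schema):

```xml
<!-- Hypothetical UseML-style use model: a hierarchical use object
     annotated with a user-group attribute, containing two elementary
     use objects (an input and a trigger) -->
<useModel name="pump_control">
  <useObject name="configure_pump" userGroups="maintenance">
    <input name="set_flow_rate"/>   <!-- enter an absolute value -->
    <trigger name="start_pump"/>    <!-- execute a device function -->
  </useObject>
</useModel>
```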

The following diagram describes the UseDM meta-model.


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces; see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS; see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
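To make the role of these sections concrete, here is a minimal sketch of a UIML interface with a single button; the part and property names are illustrative rather than taken from the specification:

```xml
<?xml version="1.0" standalone="no"?>
<!-- Illustrative UIML sketch; part and property names are assumptions -->
<uiml version="2.0">
  <interface name="Hello" class="MyApps">
    <structure>
      <!-- one UI component, organized as a (trivial) hierarchy -->
      <part name="GreetButton" class="Button"/>
    </structure>
    <style>
      <!-- bind the abstract part to an implementation and set its label -->
      <property part-name="GreetButton" name="rendering">java.awt.Button</property>
      <property part-name="GreetButton" name="label">Say hello</property>
    </style>
  </interface>
</uiml>
```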

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping
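As an illustration of the SCXML notation that AIM builds on, the following sketch describes a hypothetical toggle-button interactor with two states (the state and event names are invented for this example):

```xml
<!-- Illustrative SCXML state chart for a hypothetical toggle interactor:
     the "press" event moves it between released and pressed states -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="released">
  <state id="released">
    <transition event="press" target="pressed"/>
  </state>
  <state id="pressed">
    <transition event="press" target="released"/>
  </state>
</scxml>
```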

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend
• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium; see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high- to lower-level
• Abstraction: from low- to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface") which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.
• Relation: a group where two or more elements are related to each other.
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.
• Repeater: used to repeat the content according to data retrieved from a generic data source.

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level and common to all languages are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlation between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
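The features above can be sketched in a small MARIA-flavoured abstract presentation; the element and attribute names are illustrative assumptions, not the normative MARIA schema, but the structure (a presentation with a data model and an interactor bound to a data element) follows the description in the text:

```xml
<!-- Hypothetical MARIA-style abstract presentation: a single_choice
     interactor bound to a data model element; names are illustrative -->
<presentation name="choose_country">
  <data_model>
    <!-- data types defined with XML Schema Definition constructs -->
    <xs:element xmlns:xs="http://www.w3.org/2001/XMLSchema"
                name="country" type="xs:string"/>
  </data_model>
  <single_choice id="country_selector" data_reference="country">
    <choice_element value="IT"/>
    <choice_element value="FR"/>
  </single_choice>
</presentation>
```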

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers
• Mobile CUIs model graphical interfaces for mobile devices
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add platform-dependent (but still implementation language independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are:

• Presentation: contains the presentation_setting attribute, which holds information on the title, background (color or image) and the font used.
• Grouping: contains the grouping_setting attribute, which holds information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation.

The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, an image_link, an image_map (an image with the definition of a set of areas, each one associated with a different value) or a mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
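The inheritance mechanism described above can be illustrated with a small sketch. This is hypothetical Python, not MARIA's actual XML metamodel: each concrete widget specialises an abstract interactor and adds platform-dependent (but implementation-language independent) attributes, so AUI-level tooling continues to work on CUI elements.

```python
# Hypothetical sketch of AUI-to-CUI refinement by inheritance: a concrete
# widget is a subclass of an abstract interactor, with added attributes.

class Activator:
    """Abstract interactor: triggers an action when operated."""
    def __init__(self, action):
        self.action = action

class Button(Activator):
    """Desktop CUI refinement of Activator, adding label and font."""
    def __init__(self, action, label, font="sans-serif"):
        super().__init__(action)
        self.label = label
        self.font = font

class ImageMap(Activator):
    """Refinement where areas of an image are mapped to different values."""
    def __init__(self, action, image, areas):
        super().__init__(action)
        self.image = image
        self.areas = areas  # e.g. {"north": (0, 0, 100, 50)}

ok = Button(action="submit_form", label="OK")
# ok is still an Activator, so AUI-level tooling keeps working:
assert isinstance(ok, Activator)
```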

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis, or whether the application should ignore the event and continue.

  ◦ pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).

• Changing the synthesis properties (such as volume and gender).

• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time); nomatch (the input provided does not match any possible acceptable input); and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, indicating whether or not to synthesize the last communication again.
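To make the event model concrete, here is a minimal sketch in plain Python (not the MARIA vocal CUI syntax): each event type carries a message to render and a re-prompt flag that controls whether the last communication is synthesized again.

```python
# Sketch of the vocal event model: noinput, nomatch and help events,
# each with a message and a re-prompt flag.

EVENT_TYPES = {"noinput", "nomatch", "help"}

class VocalEventHandler:
    def __init__(self):
        self.handlers = {}

    def on(self, event_type, message, reprompt=False):
        assert event_type in EVENT_TYPES
        self.handlers[event_type] = (message, reprompt)

    def fire(self, event_type, last_prompt):
        """Return the utterances the platform should synthesize."""
        message, reprompt = self.handlers[event_type]
        out = [message]
        if reprompt:
            out.append(last_prompt)  # repeat the last communication
        return out

dialogue = VocalEventHandler()
dialogue.on("noinput", "Sorry, I didn't hear you.", reprompt=True)
dialogue.on("nomatch", "Sorry, I didn't understand.", reprompt=True)
dialogue.on("help", "You can say a city name, or 'quit'.")

dialogue.fire("noinput", "Which city are you flying to?")
# → ["Sorry, I didn't hear you.", "Which city are you flying to?"]
```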

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments, ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend/resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.
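Since JSON is mentioned as a possible alternative interchange format, a purely illustrative fragment might look like the following. This is a hypothetical serialization, not a format defined by the specification; the field names are invented for illustration only.

```json
{
  "task": "MakeFlightReservation",
  "category": "abstraction",
  "subtasks": [
    { "task": "SelectFlight", "category": "interaction" },
    { "operator": "enabling" },
    { "task": "ProvidePayment", "category": "interaction" }
  ]
}
```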

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design-time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much at the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS 2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



• Using spoken requests to play video or music tracks, based upon the Voice Extensible Markup Language (VoiceXML)

• Loosely coupling vocal and graphical user interfaces, where these are respectively described with VoiceXML and HTML; see http://www.w3.org/TR/mmi-arch/

• Extending HTML with JavaScript APIs for vocal input and output; see http://www.w3.org/2005/Incubator/htmlspeech/XGR-htmlspeech-20111206/

The W3C Multimodal Interaction Working Group has worked on:

• The Extensible Multimodal Annotation Markup Language (EMMA), which defines a markup language for containing and annotating the interpretation of user input, e.g. speech and deictic gestures.

• The Ink Markup Language (InkML), which defines a markup language for capturing traces made by a stylus or finger on a touch-sensitive surface. This opens the way to user interfaces where the user writes, rather than types or speaks, the information to be input.

Human face-to-face communication is richly multimodal, with facial gestures and body language that complement what is said. Some multimodal interfaces try to replicate this for system output by combining speech with an animated avatar (a talking head). Handwriting and speech also lend themselves to biometric techniques for user authentication, perhaps in combination with face recognition using video input.

Serenoa could address a limited class of multimodal user interfaces, but it is unclear whether it is timely to take this to standardization. A possible exception is automotive applications, where multimodal interaction can be used to mitigate concerns over driver distraction, since drivers need to keep focused on the task of driving safely.

2.4.5 Industrial UI

There is plenty of potential for applying the Serenoa framework to industrial settings. Manufacturing processes frequently involve complex user interfaces for monitoring and control purposes. These can combine mechanically operated valves and sensors together with sophisticated computer-based interactive displays. Model-based user interface design techniques could be applied to reduce the cost of designing and updating industrial user interfaces. This suggests the need for work on concrete user interface modelling languages that reflect the kinds of sensors and actuators needed on the factory floor. The need for specialized models for context awareness of interactive systems in industrial settings is covered in a later section.

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context-aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people increasingly expect to access applications on different devices: a high-resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concern, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect that interactive systems can adapt to our preferences and, crucially, to our own limitations: for instance colour blindness, a need for increased contrast and for big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity and have difficulty operating a mouse pointer or touch screen; bigger controls are then needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in a mobile device and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.
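As an illustration of the three aspects above, here is a hedged sketch (hypothetical Python, not a proposed standard API): the context of use is a simple structure grouping device, user and environment, whose properties can then be referenced in the conditions of adaptation rules.

```python
# Hypothetical context-of-use model: device, user and environment,
# with a rule condition expressed as a predicate over its properties.

from dataclasses import dataclass

@dataclass
class Device:
    screen_width: int
    screen_height: int
    orientation: str          # dynamic: "portrait" or "landscape"

@dataclass
class User:
    needs_large_fonts: bool
    prefers_vocal: bool

@dataclass
class Environment:
    noise_db: float
    ambient_light: str        # e.g. "bright" or "dark"

@dataclass
class ContextOfUse:
    device: Device
    user: User
    environment: Environment

# A condition from a hypothetical adaptation rule: use vocal output
# only when the user prefers it and the environment is quiet enough.
def use_vocal_output(ctx: ContextOfUse) -> bool:
    return ctx.user.prefers_vocal and ctx.environment.noise_db < 60

ctx = ContextOfUse(
    device=Device(320, 480, "portrait"),
    user=User(needs_large_fonts=True, prefers_vocal=True),
    environment=Environment(noise_db=40.0, ambient_light="bright"),
)
assert use_vocal_output(ctx)
```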

2.5.2 Industry Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models of the context of use in industrial settings.

2.5.3 Automotive Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges: the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car-to-car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety-related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night, due to the lights of oncoming traffic, or in the day, when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas on context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012 and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive-oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing and evaluating context-aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes:

• What kinds of things are being adapted, e.g. the navigational flow or the size of text and images?

• Who is triggering and controlling the adaptation process, e.g. the end user, the system, or a third party?

• When does the adaptation take place, e.g. at design-time or run-time?

• Where does the adaptation take place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity?

• Which aspects of the context are involved in the adaptation?

• How is the adaptation performed, i.e. what strategies and tactics are involved?

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate and compare multiple applications with regard to their level of adaptation coverage, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.

Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the Cameleon reference framework.

2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces.

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in a forward-chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule-weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.

Forward-chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later, when all of the applicable rules have fired.
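The forward-chaining cycle described above can be sketched in a few lines. This is a naive illustration, not part of the Serenoa deliverables: the model is a plain dictionary, rules are (condition, conclusion) pairs, and the engine fires matching rules until a fixed point is reached. A RETE-style engine would avoid re-testing every condition on each cycle.

```python
# Naive forward chaining over a user-interface model held as a dict.
# Each rule is (condition, conclusion); the engine fires matching rules
# until no rule changes the model (a fixed point is reached).

def forward_chain(model, rules):
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(model):
                before = dict(model)
                conclusion(model)
                if model != before:     # only count real updates
                    changed = True
    return model

# Hypothetical design-time rules: small screens get a single-column
# layout, and single-column layouts get a larger font.
rules = [
    (lambda m: m["screen_width"] < 480 and m.get("layout") != "single-column",
     lambda m: m.update(layout="single-column")),
    (lambda m: m.get("layout") == "single-column" and m.get("font") != "large",
     lambda m: m.update(font="large")),
]

model = forward_chain({"screen_width": 320}, rules)
# model is now {"screen_width": 320, "layout": "single-column", "font": "large"}
```

Note that the second rule fires only as a consequence of the first, which is the chaining behaviour the paragraph describes.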

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface, and down to other targets at the concrete user interface layer.

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement are required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated by lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).
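A hypothetical event-condition-action rule could be registered and dispatched as in the following sketch. This is plain Python for illustration, not AAL-DL's actual syntax; the rule, event name and context properties are invented.

```python
# Minimal event-condition-action (ECA) dispatcher: each rule names the
# event it reacts to, a condition over the context, and an action.

class ECAEngine:
    def __init__(self):
        self.rules = []

    def add_rule(self, event, condition, action):
        self.rules.append((event, condition, action))

    def signal(self, event, context):
        """Fire every rule registered for this event whose condition holds."""
        for ev, condition, action in self.rules:
            if ev == event and condition(context):
                action(context)

engine = ECAEngine()
# Hypothetical run-time rule: when ambient noise rises, switch the UI
# from the vocal modality to the graphical one.
engine.add_rule(
    "noise_level_changed",
    condition=lambda ctx: ctx["noise_db"] > 70 and ctx["modality"] == "vocal",
    action=lambda ctx: ctx.update(modality="graphical"),
)

context = {"noise_db": 85, "modality": "vocal"}
engine.signal("noise_level_changed", context)
# context["modality"] is now "graphical"
```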

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
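The querying and change-signalling role of the Context Manager can be sketched as a simple observer, where an Adaptation Engine would register as a listener (a sketch only; the property name below is a hypothetical illustration):

```python
# Sketch of a Context Manager that supports querying the context of
# use and signalling changes to subscribers (e.g. an Adaptation
# Engine). The context property names are hypothetical.

class ContextManager:
    def __init__(self):
        self._context = {}      # current context of use
        self._listeners = []    # callbacks notified on change

    def subscribe(self, listener):
        self._listeners.append(listener)

    def query(self, key, default=None):
        return self._context.get(key, default)

    def update(self, key, value):
        old = self._context.get(key)
        self._context[key] = value
        if old != value:  # signal only genuine changes
            for listener in self._listeners:
                listener(key, old, value)

manager = ContextManager()
changes = []
manager.subscribe(lambda key, old, new: changes.append((key, old, new)))
manager.update("device.orientation", "landscape")
print(changes)  # [('device.orientation', None, 'landscape')]
```

Decoupling the Context Manager from its subscribers in this way is what lets the same context model feed either a cloud-hosted or an on-device Adaptation Engine.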

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.
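As a rough sketch of the compilation idea, a simple context condition could be translated into a CSS Media Query at build time (the rule dictionary below is a hypothetical stand-in for AAL-DL, used only to illustrate the idea):

```python
# Sketch: compiling a simple high-level adaptation condition into a
# CSS Media Query. The rule dictionary format is a hypothetical
# stand-in for AAL-DL.

def compile_to_media_query(rule):
    condition = "(max-width: %dpx)" % rule["max_width"]
    body = "; ".join("%s: %s" % (p, v) for p, v in rule["style"].items())
    return "@media screen and %s { %s { %s } }" % (
        condition, rule["selector"], body)

rule = {"max_width": 480,
        "selector": ".menu",
        "style": {"display": "none"}}

print(compile_to_media_query(rule))
# @media screen and (max-width: 480px) { .menu { display: none } }
```

Only conditions expressible in the media query feature set can be compiled this way; richer AAL-DL rules would still need run-time support.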


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1 AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If user is colour-blind then use alternative colour palette

In XML, this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115

W3C went on to work on a device-independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers


• handheld - Intended for handheld devices (typically small screen, limited bandwidth)

• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• screen - Intended primarily for color computer screens.
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.

• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.

• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available).

Few browsers supported CSS media queries apart from screen and print. More recently the specification has added further capabilities, and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011, and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends.

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models.

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and test suite for demonstrating interoperability.

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers).

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications go through the following stages. These have been annotated with the dates the MBUI deliverables were envisioned by the charter to reach each stage.

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework; see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram.

Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.


The temporal operators are as follows:

Operator            Symbol
Enabling            T1 >> T2 or T1 []>> T2
Disabling           T1 [> T2
Interruption        T1 |> T2
Choice              T1 [] T2
Iteration           T1* or T1n
Concurrency         T1 ||| T2 or T1 |[]| T2
Optionality         [T]
Order Independency  T1 |=| T2

where the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.
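The semantics of two of these operators can be sketched in code (an illustrative toy covering only Enabling and Choice, not the full CTT simulation semantics; the task names are hypothetical):

```python
# Minimal sketch of two CTT temporal operators, Enabling (>>) and
# Choice ([]), showing which leaf tasks are enabled at the start.
# This is an illustrative toy, not a full CTT simulator.

class Task:
    def __init__(self, name, operator=None, children=()):
        self.name = name
        self.operator = operator  # ">>", "[]" or None for leaf tasks
        self.children = list(children)

def enabled(task):
    """Return the names of leaf tasks that can start first."""
    if not task.children:
        return [task.name]
    if task.operator == ">>":      # only the first subtask is enabled
        return enabled(task.children[0])
    if task.operator == "[]":      # any alternative is enabled
        result = []
        for child in task.children:
            result.extend(enabled(child))
        return result
    raise ValueError("unsupported operator")

model = Task("AccessATM", ">>", [
    Task("InsertCard"),
    Task("SelectOperation", "[]", [Task("Withdraw"), Task("Deposit")]),
])

print(enabled(model))  # ['InsertCard']
```

Tool support for CTT computes enabled task sets in essentially this way to drive simulation and validation of a task model.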

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated with attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model.


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces; see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS; see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
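As a quick check of the template's shape, the five sections can be listed programmatically (a sketch using Python's standard XML parser, not a UIML tool):

```python
# Sketch: parsing the UIML template with Python's standard library
# to list the sections of the interface element.
import xml.etree.ElementTree as ET

UIML = """<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>"""

root = ET.fromstring(UIML)
interface = root.find("interface")
print([child.tag for child in interface])
# ['description', 'structure', 'data', 'style', 'events']
```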

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details see the link above.
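The event-based state transition idea can be sketched as a small state machine (a toy in the spirit of SCXML, not an SCXML implementation; the states and events are hypothetical):

```python
# Toy state machine for an interactor, in the spirit of SCXML's
# event-based state transitions (not an SCXML implementation).
# State and event names are hypothetical.

TRANSITIONS = {
    ("idle", "focus"): "focused",
    ("focused", "input"): "editing",
    ("editing", "commit"): "idle",
    ("editing", "cancel"): "focused",
}

def run(events, state="idle"):
    """Feed events to the interactor; unknown events are ignored."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state

print(run(["focus", "input", "commit"]))  # idle
print(run(["focus", "input", "cancel"]))  # focused
```

In AIM, a concrete interactor for each mode or medium would realize the same abstract transition behaviour.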

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators, including sequence, redundance, complementary, assignment and equivalence.
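The mapping structure can be sketched in code; the "complementary" semantics below (fire only when observations from different modalities hold together) is a simplified illustration, and the observation and action names are hypothetical:

```python
# Sketch of a multimodal mapping: an operator links a set of
# observations (on state machines) to a set of actions. The
# "complementary" semantics here is a simplified illustration.

class Mapping:
    def __init__(self, observations, actions, operator="complementary"):
        self.observations = observations  # state names to observe
        self.actions = actions            # callables to trigger
        self.operator = operator

    def notify(self, observed_states):
        """Complementary: fire only when all observations hold together."""
        if self.operator == "complementary" and \
                all(s in observed_states for s in self.observations):
            return [action() for action in self.actions]
        return []

mapping = Mapping(
    observations={"pointer:over-target", "speech:command-delete"},
    actions=[lambda: "delete-target"],
)

# Pointing alone does nothing; pointing plus the voice command fires.
print(mapping.notify({"pointer:over-target"}))   # []
print(mapping.notify({"pointer:over-target",
                      "speech:command-delete"})) # ['delete-target']
```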

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium; see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTrees (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower level
• Abstraction: from low to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of the users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is presented to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements
• Relation: a group where two or more elements are related to each other
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements
• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime, modifying the state of an interactor will also change the value of the bound data element and vice versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end (e.g. web services, code libraries, databases). Each declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations.

This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation-language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.

• Mobile CUIs model graphical interfaces for mobile devices.

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.

• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented on the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation-language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model.

The existing elements with new attributes are:

• Presentation: contains the presentation_setting attribute, which holds information on the title, the background (colour or image) and the font used.

• Grouping: contains the grouping_setting attribute, which holds information on the grouping display technique (grid, fieldset, bullet, background colour or image) and on whether the elements are related by an ordering or hierarchy relation.

The classes that have been extended through inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), of the speech recognizer (e.g. sensitivity, accuracy level) and of the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis or whether the application should ignore the event and continue;

  - pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);

• changing the synthesis properties (such as volume and gender);

• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
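As an illustrative sketch of how these event handlers might be attached to a vocal interactor (the element names are hypothetical, not the normative concrete syntax; only the noinput/nomatch/help events and the message and re-prompt attributes come from the description above):

```xml
<!-- Hypothetical fragment: element names are illustrative, not normative syntax -->
<vocal_textual_input request="Please say your destination city">
  <noinput message="Sorry, I did not hear anything" re-prompt="true"/>
  <nomatch message="Sorry, I did not understand that" re-prompt="true"/>
  <help message="Say the name of the city you want to travel to" re-prompt="false"/>
</vocal_textual_input>
```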

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments, ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling
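To give a flavour of how these operators combine in practice, a task expression for a cash withdrawal might read as follows (an illustrative textual rendering using commonly used CTT operator symbols, not the normative syntax of the specification):

```
AccessATM = InsertCard >> EnterPIN >> (Withdraw [] CheckBalance) [> CancelOperation
```

Here `>>` is enabling, `[]` is choice and `[>` is disabling: cancelling at any point deactivates the remainder of the task.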

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and is not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that, whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design-time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, the Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, pp. 134-135.

• Paternò, F., Santoro, C. and Spano, L.D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Transactions on Computer-Human Interaction, ACM, 2009, Vol. 16, No. 4, pp. 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



awareness of interactive systems in industrial settings is covered ina later section

2.5 Context of Use

This section looks at the context of use and its role in supporting adaptation, starting with general considerations and then taking a look at industrial and automotive settings.

2.5.1 General Considerations

What is the context of use, and how does it assist in enabling context-aware interactive systems? There are three main aspects:

1. the capabilities of the device hosting the user interface
2. the user's preferences and capabilities
3. the environment in which the interaction is taking place

Some device capabilities are static, e.g. the size and resolution of the screen, but others change dynamically, e.g. the orientation of the screen as portrait or landscape. Designers need to be able to target a range of devices, as people increasingly expect to access applications on different devices: a high-resolution desktop computer with a mouse pointer, a smart phone, a tablet, a TV, or even a car. Model-based techniques can help by separating out different levels of concerns, but this is dependent on understanding the context of use.

We are all individuals, and it is natural for us to expect that interactive systems can adapt to our preferences and, crucially, to our own limitations: for instance, colour blindness, a need for increased contrast and big fonts to cope with limited vision, or aural interfaces when we can't see (or have our eyes busy with other matters). Some of us have limited dexterity and have difficulty operating a mouse pointer or touch screen; bigger controls are then needed, along with the possibility of using assistive technology.

A further consideration is enabling applications to adapt to our emotional state, based upon the means to detect emotional cues from speech. In the car, researchers are using gaze tracking to see what we are looking at, and are assessing how tired we are from the frequency with which we blink, as well as the smoothness with which we are operating the car.

Finally, we are influenced by the environment in which we are using interactive systems: hot/cold, quiet/noisy, brightly lit/dark, the level of distractions, and so forth. Other factors include the battery level in a mobile device and the robustness, or lack thereof, of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.
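A minimal sketch of how such a conceptual model might be exposed programmatically, covering the three aspects listed above (all class and property names here are assumptions for illustration, not part of any standard API):

```python
from dataclasses import dataclass

# Hypothetical context-of-use model: device capabilities,
# user preferences/capabilities, and environment.
@dataclass
class Device:
    screen_width: int
    screen_height: int
    orientation: str = "portrait"   # changes dynamically

@dataclass
class User:
    prefers_large_fonts: bool = False
    colour_blind: bool = False

@dataclass
class Environment:
    noise_level: str = "quiet"      # e.g. "quiet" or "noisy"
    battery_level: float = 1.0      # 0.0 .. 1.0

@dataclass
class ContextOfUse:
    device: Device
    user: User
    environment: Environment

    def property(self, path: str):
        """Expose context values as dotted properties, as might be
        referenced in the condition of an adaptation rule."""
        obj = self
        for part in path.split("."):
            obj = getattr(obj, part)
        return obj

ctx = ContextOfUse(Device(1920, 1080), User(prefers_large_fonts=True), Environment())
print(ctx.property("user.prefers_large_fonts"))  # True
```

A rule condition could then test `ctx.property("environment.noise_level") == "noisy"` to decide, say, whether to switch from an aural to a visual alert.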

2.5.2 Industry Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system; the application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.

2.5.3 Automotive Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges: the need to keep the driver safely focused on the road, and the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control
• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road
• detection of when the car is veering out of the lane
• automatic dipping of headlights in the face of oncoming traffic
• automatic sensing of road signs
• adaptation for night-time operation
• car-to-car exchanges of information on upcoming hazards
• access to the current location via GPS
• access to live traffic data over mobile networks
• dead-spot cameras for easier reversing
• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety-related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night due to the lights of oncoming traffic, or in the day when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012 and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive-oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)

• Context-aware Design Space (CADS)

• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing and evaluating context-aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes:

• What kinds of things are being adapted? E.g. the navigational flow, or the size of text and images.

• Who is triggering and controlling the adaptation process? E.g. the end user, the system, or a third party.

• When does the adaptation take place? E.g. design-time or run-time.

• Where does adaptation take place? E.g. in the device hosting the user interface, in the cloud, or at some proxy entity.

• Which aspects of the context are involved in the adaptation?

• How is the adaptation performed? I.e. what strategies and tactics are involved.

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate and compare multiple applications with regard to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.


Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. to propagate the effects of changes across layers in the Cameleon reference framework

2. to provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in a forward-chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule-weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions and the rule engine then tries to find which further rules would match the conditions.

Forward-chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later, when all of the applicable rules have fired.
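The forward-chaining cycle described above can be sketched as follows (a naive matcher, not a RETE implementation; the example rules and model properties are invented for illustration):

```python
# Naive forward chaining over a model represented as a dict.
# Each rule is a (condition, conclusion) pair; conclusions update the model.
# Firing continues until no further rule becomes applicable.

def forward_chain(model, rules):
    fired = set()
    changed = True
    while changed:
        changed = False
        for i, (condition, conclusion) in enumerate(rules):
            if i not in fired and condition(model):
                conclusion(model)   # fire: update the model
                fired.add(i)
                changed = True
    return model

# Illustrative design-time rules propagating a change through a UI model:
rules = [
    (lambda m: m["platform"] == "smartphone",
     lambda m: m.update(layout="single_column")),
    (lambda m: m.get("layout") == "single_column",
     lambda m: m.update(font_size="large")),
]

model = {"platform": "smartphone"}
forward_chain(model, rules)
print(model)  # {'platform': 'smartphone', 'layout': 'single_column', 'font_size': 'large'}
```

Note how the second rule only becomes applicable after the first has fired, which is exactly the propagation behaviour described above.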

To enable rules to respond to changes in models the rules can becast in the form of event-condition-action where an eventcorresponds to a change the user has made to the model Manualchanges to the abstract user interface can be propagated to each ofthe targets for the concrete user interface for instance desktopsmart phone and tablet Likewise manual changes to the concreteuser interface for a smart phone can be propagated up to theabstract user interface and down to other targets at the concreteuser interface layer

The set of rules act as an cooperative assistant that applies bestpractices to help the designer Sometimes additional informationand human judgement is required The rules can be written to passoff tasks to the human designer via a design agenda

One challenge is to ensure that the maintainability of the set ofrules as the number of rules increases This requires carefulattention to separation of different levels of detail so that highlevel rules avoid dealing with details that are better treated withlower level rules

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.

17

2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
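The interplay of the three run-time modules can be sketched as follows. The interfaces and the sample rule are assumptions made for illustration, not the actual Serenoa APIs; the AdaptationEngine here is a hard-coded stand-in for an AAL-DL rule interpreter.

```python
# Sketch of the three run-time modules: Context Manager, Adaptation Engine,
# Run-time Engine. All names and behaviour are illustrative assumptions.

class ContextManager:
    """Tracks the context of use and signals changes to listeners."""
    def __init__(self):
        self.context = {"user": {}, "device": {}, "environment": {}}
        self.listeners = []

    def update(self, category, key, value):
        self.context[category][key] = value
        for listener in self.listeners:
            listener(category, key, value)

class AdaptationEngine:
    """Maps context changes to adaptations (stand-in for AAL-DL rules)."""
    def suggest(self, category, key, value):
        if category == "environment" and key == "light" and value == "low":
            return {"theme": "high-contrast"}
        return {}

class RuntimeEngine:
    """Applies suggested adaptations to the final user interface."""
    def __init__(self):
        self.final_ui = {"theme": "default"}

    def apply(self, adaptation):
        self.final_ui.update(adaptation)

# Wire the modules together: context changes flow through the engine.
context, engine, runtime = ContextManager(), AdaptationEngine(), RuntimeEngine()
context.listeners.append(lambda c, k, v: runtime.apply(engine.suggest(c, k, v)))

context.update("environment", "light", "low")
print(runtime.final_ui["theme"])  # high-contrast
```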

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance and reliability.
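The distinction between first- and second-order rules can be illustrated with a small sketch: second-order rules choose which first-order rule sets are active for a given context of use. The names and rule contents are assumptions for illustration, not AAL-DL syntax.

```python
# Illustrative sketch of first-order vs. second-order adaptation rules.

# First-order rules adapt the UI directly, grouped into named rule sets.
first_order = {
    "vocal": [lambda ui: ui.update(output="speech")],
    "graphical": [lambda ui: ui.update(output="screen")],
}

# Second-order rules select which first-order rule set applies,
# based on the context of use.
second_order = [
    (lambda ctx: ctx.get("driving"), "vocal"),
    (lambda ctx: not ctx.get("driving"), "graphical"),
]

def adapt(ctx, ui):
    for condition, ruleset in second_order:
        if condition(ctx):
            for rule in first_order[ruleset]:
                rule(ui)
    return ui

print(adapt({"driving": True}, {})["output"])  # speech
```

A third-order rule would, by analogy, rewrite or reprioritize the entries of `second_order` itself.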

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If user is colour-blind then use alternative color palette

In XML this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose for how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles were published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers
• handheld - Intended for handheld devices (typically small screen, limited bandwidth)
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities, and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy. The more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011, and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications pass through the following stages. These have been annotated with the dates when the MBUI deliverables were envisioned by the charter to reach each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR, and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator              Symbol
Enabling              T1 >> T2 or T1 []>> T2
Disabling             T1 [> T2
Interruption          T1 |> T2
Choice                T1 [] T2
Iteration             T1* or T1(n)
Concurrency           T1 ||| T2 or T1 |[]| T2
Optionality           [T]
Order Independency    T1 |=| T2

Where the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user
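A CTT-style task hierarchy with temporal relations and allocations can be sketched as a simple data structure. This is purely illustrative, not an official CTT serialization; the task names are invented for the example.

```python
# Minimal sketch of a CTT-style task tree: hierarchy, allocation, and the
# temporal operator relating each task to its next sibling.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    name: str
    allocation: str = "user input"    # "system", "user input" or "cognition"
    operator: Optional[str] = None    # temporal relation to the next sibling, e.g. ">>"
    children: List["Task"] = field(default_factory=list)

def leaves_in_order(task):
    """Flatten the hierarchy; with only enabling (>>) operators, this is
    the order in which tasks become enabled."""
    if not task.children:
        return [task.name]
    names = []
    for child in task.children:
        names.extend(leaves_in_order(child))
    return names

login = Task("Login", children=[
    Task("Show form", allocation="system", operator=">>"),
    Task("Enter credentials", operator=">>"),
    Task("Submit"),
])

print(leaves_in_order(login))  # ['Show form', 'Enter credentials', 'Submit']
```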

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators, including sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, and defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTrees (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), which are compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework, with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: Allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: Allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: Represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.

• Relation: a group where two or more elements are related to each other.

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.

• Repeater: used to repeat the content according to data retrieved from a generic data source.
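As a concrete illustration, a presentation built from these interactors and compositions might be serialized along the following lines. The element and attribute names here are illustrative assumptions, not the normative MARIA XML schema.

```xml
<!-- Illustrative sketch of a MARIA-style abstract presentation.
     Element and attribute names are assumptions, not the normative
     MARIA XML schema. -->
<presentation name="search_flight">
  <grouping name="query_form">
    <!-- Selection interactor (Single Choice) over a predefined list -->
    <single_choice name="cabin_class">
      <choice_element value="economy"/>
      <choice_element value="business"/>
    </single_choice>
    <!-- Edit interactor (Text Edit) bound to a data model element -->
    <text_edit name="destination" data_reference="query/destination"/>
    <!-- Control interactor (Activator) invoking an external function -->
    <activator name="search" function="searchFlights"/>
  </grouping>
  <!-- Only-output interactor presenting the results -->
  <only_output name="results" type="description"/>
</presentation>
```

Note that nothing in this sketch mentions widgets, modalities or layout: those details belong to the Concrete User Interface refinements described below.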

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: The interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that, at runtime, modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlation between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.

• Mobile CUIs model graphical interfaces for mobile devices.

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.

• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language-independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are:

• Presentation: contains the presentation_setting attribute, which holds information on the title, background (color or image) and the font used.

• Grouping: contains the grouping_setting attribute, which holds information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation.

The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
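For example, the abstract interactors from the earlier flight-search sketch might be refined for the graphical desktop platform along these lines; the serialization is again an illustrative assumption, not the normative desktop CUI schema.

```xml
<!-- Illustrative sketch of a graphical desktop refinement.
     Element and attribute names are assumptions, not the normative
     MARIA desktop CUI schema. -->
<presentation name="search_flight">
  <presentation_setting title="Flight search" background="#ffffff"
                        font="sans-serif"/>
  <grouping name="query_form">
    <grouping_setting technique="fieldset"/>
    <!-- The abstract SingleChoice refined as a drop-down list -->
    <drop_down_list name="cabin_class">
      <choice_element value="economy"/>
      <choice_element value="business"/>
    </drop_down_list>
    <!-- The abstract TextEdit refined as a text field -->
    <text_field name="destination"/>
    <!-- The abstract Activator refined as a button -->
    <button name="search" label="Search"/>
  </grouping>
</presentation>
```

The same abstract model could equally be refined to a radio_button or image_map for the SingleChoice: the choice among the permitted concrete implementations is where platform-specific design decisions enter.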

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  ◦ pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version, the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).

• Changing the synthesis properties (such as volume and gender).

• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time); nomatch (the input provided does not match any possible acceptable input); and help, when the user asks for support (in any platform-specific way) in order to continue the session. All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, N. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation, and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation, and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to that of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation, and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
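To make the suspend/resume case concrete, the automotive scenario might be modelled along the following lines. This is an illustrative sketch only: in CTT the temporal operators are written between sibling tasks, and the element and attribute names below are assumptions rather than the normative schema of the Working Draft.

```xml
<!-- Illustrative CTT-style task model for the driver-distraction
     scenario. Names are assumptions, not the normative W3C task
     models schema; the operator is shown as an attribute of the
     parent task for compactness. -->
<taskModel name="in_car_interaction">
  <!-- suspend_resume: the safety-critical hazard alert can suspend
       the entertainment task, which is resumed once the hazard has
       been passed -->
  <task name="drive" operator="suspend_resume">
    <task name="entertainment" category="interaction"/>
    <task name="hazard_alert" category="application"/>
  </task>
</taskModel>
```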


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation, and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V., "USIXML: A Language Supporting Multi-Path Development of User Interfaces", Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D., "MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments", ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J., "A Review of XML-Compliant User Interface Description Languages", Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



battery level in mobile devices, and the robustness or lack of the connection to the network.

From a standardization perspective, there is an opportunity to formalize the conceptual models for the context of use, and how these are exposed through application programming interfaces (APIs) and as properties in the conditions of adaptation rules.

2.5.2 Industry Fulfilment of Safety Guidelines

Interactive systems for industrial settings need to adapt to dynamic changes in the context of use. A robot arm may need to be kept stationary to allow a human to safely interact with the system. The application thus needs to be able to alter its behaviour based upon sensing the proximity of the user. Another case is where the user must be on hand to monitor the situation and take control of potentially dangerous processes. This suggests the need for specialized models for the context of use in industrial settings.

2.5.3 Automotive Mitigation of Driver Distraction

Interactive systems in the car pose interesting challenges, in the need to keep the driver safely focused on the road, and in the risk of legal liability if that isn't handled effectively.

Modern cars have increasingly sophisticated sensors and external sources of information. Some examples include:

• imminent collision detection and braking control

• dynamic adjustment of road-handling to match current conditions, e.g. when there is ice or water on the road

• detection of when the car is veering out of the lane

• automatic dipping of headlights in the face of oncoming traffic

• automatic sensing of road signs

• adaptation for night-time operation

• car-to-car exchanges of information on upcoming hazards

• access to the current location via GPS

• access to live traffic data over mobile networks

• dead-spot cameras for easier reversing

• sophisticated sensors in many of the car's internal systems

Drivers need to be kept aware of the situation and free of distractions that could increase the risk of an accident. Phone conversations and entertainment services need to be suspended when appropriate, e.g. when approaching a junction or when the car ahead is slowing down. Safety-related alerts need to be clearly recognizable under all conditions. Visual alerts may be ineffective at night, due to the lights of oncoming traffic, or in the day, when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down, or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012, and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive-oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing and evaluating context-aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes:

• What kinds of things are being adapted, e.g. the navigational flow, or the size of text and images

• Who is triggering and controlling the adaptation process, e.g. the end user, the system, or a third party

• When the adaptation takes place, e.g. design-time or run-time


• Where adaptation takes place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity

• Which aspects of the context are involved in the adaptation

• How the adaptation is performed, i.e. what strategies and tactics are involved

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate and compare multiple applications with regard to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.


Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the CAMELEON reference framework.

2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces.

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in a forward-chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.

Forward-chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later, when all of the applicable rules have fired.
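The forward-chaining cycle described above can be sketched in a few lines. This is a minimal illustrative engine with first-match conflict resolution, not the Serenoa implementation, and the two rules (a layout check and a corporate title guideline) are invented for the example.

```python
# Minimal forward-chaining sketch: rules are matched against the
# current state of a model, and a matching rule's conclusion fires to
# update the model, until no rule applies. Illustrative only.

def forward_chain(model, rules):
    """Run rules to quiescence using first-match conflict resolution."""
    fired = []
    while True:
        applicable = [r for r in rules if r["condition"](model)]
        if not applicable:
            return model, fired
        rule = applicable[0]        # pick the first matching rule
        rule["action"](model)       # fire its conclusion
        fired.append(rule["name"])

# Hypothetical design-time rules (invented for the example):
rules = [
    # Narrow screens should use a single-column layout.
    {"name": "single-column",
     "condition": lambda m: m["screen_width"] < 480 and m["columns"] != 1,
     "action": lambda m: m.update(columns=1)},
    # Corporate guideline: every presentation must carry a title.
    {"name": "require-title",
     "condition": lambda m: not m.get("title"),
     "action": lambda m: m.update(title="Untitled")},
]

model = {"screen_width": 320, "columns": 2, "title": ""}
model, fired = forward_chain(model, rules)
print(fired)  # rules fired, in order
```

Note that termination depends on each action falsifying its own condition; a production system with conflicting rules would need the weighting or agenda mechanisms mentioned above.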

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface, and down to other targets at the concrete user interface layer.

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement are required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated with lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.
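Such a model-to-model transformation might be sketched in XSLT along the following lines. The element names (selection, choice) are invented for illustration and are not taken from any Serenoa or W3C vocabulary:

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch: map an abstract "selection" interactor to a
     concrete HTML select element. Source element names are invented. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="selection">
    <select name="{@name}">
      <xsl:apply-templates select="choice"/>
    </select>
  </xsl:template>
  <xsl:template match="choice">
    <option value="{@value}"><xsl:value-of select="."/></option>
  </xsl:template>
</xsl:stylesheet>
```

The example also illustrates the transparency concern noted above: the mapping logic is spread across templates, which can make large rule sets hard to review.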

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.

2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).
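The event-condition-action pattern for run-time adaptation can be sketched as follows. This is a toy dispatcher in Python, not AAL-DL; the context properties, thresholds and adaptations are invented for illustration:

```python
# Toy event-condition-action dispatcher for run-time adaptation.
# Context properties and adaptations are invented; AAL-DL expresses
# such rules declaratively rather than as host-language code.

class AdaptationEngine:
    def __init__(self):
        self.rules = []   # list of (event, condition, action) triples

    def rule(self, event, condition, action):
        self.rules.append((event, condition, action))

    def on_event(self, event, context):
        """Called when the context of use changes; returns the adaptations."""
        applied = []
        for ev, cond, act in self.rules:
            if ev == event and cond(context):
                applied.append(act(context))
        return applied

engine = AdaptationEngine()
# If ambient noise rises above a threshold, switch from vocal to graphical UI.
engine.rule("environment-changed",
            lambda ctx: ctx["noise_db"] > 70,
            lambda ctx: "use graphical modality")
# If the display is narrow, collapse navigation into a menu.
engine.rule("device-changed",
            lambda ctx: ctx["screen_width"] < 480,
            lambda ctx: "collapse navigation menu")

print(engine.on_event("environment-changed", {"noise_db": 85}))
```

Here the events correspond to signals from a context manager, and the returned adaptations stand in for the detailed changes that separate adaptation modules would apply.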

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.

The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If user is colour-blind, then use alternative colour palette

In XML this looks like:
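The actual AAL-DL serialization for this rule is given in deliverable D3.3.1 and is not reproduced here. As a purely hypothetical illustration of how an event-condition-action rule of this kind might be serialized (all element and attribute names below are invented, not AAL-DL):

```xml
<!-- Hypothetical sketch only; NOT the actual AAL-DL serialization. -->
<rule>
  <event name="userProfileChanged"/>
  <condition>user.colourBlind = true</condition>
  <action type="update">
    <target>concreteUI.colourPalette</target>
    <value>alternative-palette</value>
  </action>
</rule>
```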

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.

2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).
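As a simple illustration of such a design-time verification rule (a hypothetical check, not part of the Serenoa Framework), a rule could flag concrete UI elements whose colours fall outside the corporate palette and add them to a design agenda for the human designer:

```python
# Hypothetical design-time check: flag widget colours that are not in
# the corporate palette. The palette and widget data are invented.

CORPORATE_PALETTE = {"#003366", "#ffffff", "#e8571a"}

def check_palette(widgets):
    """Return a design agenda: one entry per off-brand widget."""
    agenda = []
    for w in widgets:
        if w["colour"].lower() not in CORPORATE_PALETTE:
            agenda.append(f"Widget '{w['id']}': colour {w['colour']} "
                          "is outside the corporate palette")
    return agenda

widgets = [{"id": "header", "colour": "#003366"},
           {"id": "buyButton", "colour": "#FF0000"}]
for issue in check_palette(widgets):
    print(issue)
```

In a rule-based setting, each such check would be one rule, and the agenda entries would be the tasks passed off to the designer.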

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model Based User Interfaces Working Group was formed on 17 October 2011 and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers
• handheld - Intended for handheld devices (typically small screen, limited bandwidth)

• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.

• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.

• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619
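For example, a style sheet might adapt layout to the device category and screen width along these lines (an illustrative fragment, not taken from any Serenoa deliverable):

```css
/* Default (desktop) layout */
nav { float: left; width: 20%; }

/* Print: hide navigation entirely */
@media print {
  nav { display: none; }
}

/* Small screens: stack the navigation above the content */
@media screen and (max-width: 480px) {
  nav { float: none; width: 100%; }
}
```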

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, with the launch of the Model-Based User Interfaces Working Group.

3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks, together with the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter

The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows.

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends.

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models.

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and test suite for demonstrating interoperability.

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:

Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers).

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications follow the following stages. These have been annotated with the dates at which the MBUI deliverables were envisioned by the charter to reach each stage.

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.

3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:

Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.

The temporal operators are as follows:

Operator              Symbol
Enabling              T1 >> T2 or T1 []>> T2
Disabling             T1 [> T2
Interruption          T1 |> T2
Choice                T1 [] T2
Iteration             T1* or T1(n)
Concurrency           T1 ||| T2 or T1 |[]| T2
Optionality           [T]
Order Independency    T1 |=| T2

The second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.
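For example, using these operators, a simple login task might be decomposed as follows (an illustrative decomposition, not taken from the CTT submission):

```
Login            = EnterCredentials []>> SubmitForm
EnterCredentials = EnterUserName ||| EnterPassword
```

Here the user name and password can be entered concurrently, and submitting the form is enabled, with the credential values passed along, once both have been provided.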

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:

There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization

The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model:

The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:

1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:

AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes, by sending events to state charts or by calling functions in the backend
• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation

3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTrees (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), which are compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification - from higher to lower level
• Abstraction - from lower to higher level
• Reflexion - at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.
• Relation: a group where two or more elements are related to each other.
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.

• Repeater: used to repeat the content according to data retrieved from a generic data source.
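To make this structure concrete, the following sketch assembles a small abstract presentation in the spirit of MARIA's AUI level. The element and attribute names are illustrative only and do not follow the normative MARIA schema.

```python
import xml.etree.ElementTree as ET

# Illustrative AUI fragment: a presentation containing a grouping with a
# single_choice selection and a text_edit, plus an activator.
# Element names here are hypothetical, not the normative MARIA vocabulary.
aui = ET.Element("presentation", name="login")
grouping = ET.SubElement(aui, "grouping", name="credentials")
ET.SubElement(grouping, "single_choice", name="role", cardinality="1")
ET.SubElement(grouping, "text_edit", name="username")
ET.SubElement(aui, "activator", name="submit")

# Serialize to an XML string, as would be done for interchange.
xml_text = ET.tostring(aui, encoding="unicode")
print(xml_text)
```

A real MARIA document would additionally carry the data model and external function declarations described below.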

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end (e.g. web services, code libraries, databases, etc.). A declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionality (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, yet is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
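The two-way interactor/data binding described above can be sketched with a simple observer mechanism; the class and method names below are invented for illustration and are not part of MARIA.

```python
class DataElement:
    """A data-model element whose value is kept in sync with bound interactors."""
    def __init__(self, value=None):
        self.value = value
        self._bound = []          # interactors bound to this element

    def bind(self, interactor):
        self._bound.append(interactor)
        interactor._data = self

    def set(self, value):         # change from the data-model side
        self.value = value
        for it in self._bound:    # propagate to every bound interactor
            it.state = value

class Interactor:
    """An abstract interactor; updating its state writes through to the data model."""
    def __init__(self, name):
        self.name = name
        self.state = None
        self._data = None

    def update(self, value):      # change from the UI side
        self.state = value
        if self._data is not None:
            self._data.value = value   # vice-versa propagation

age = DataElement()
field = Interactor("age_edit")
age.bind(field)

field.update(42)      # a UI edit propagates to the data model
assert age.value == 42
age.set(43)           # a data-model change propagates back to the UI
assert field.state == 43
```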

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent, but implementation-language-independent, details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUI: models graphical interfaces for desktop computers
• Mobile CUI: models graphical interfaces for mobile devices
• Multimodal Desktop CUI: models interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUI: models interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUI: models interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation-language-independent) information to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute with information on the title, background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute with information on the grouping display technique (grid, fieldset, bullet, background colour or image) and on whether the elements are related by an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.
• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.
• A TextEdit can be implemented as a text_field or a text_area.
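The inheritance-based refinement amounts to a mapping from each abstract interactor to its admissible desktop widgets. The sketch below simply records the alternatives listed above; the selection function is a hypothetical helper, not part of MARIA.

```python
# Admissible desktop widgets per abstract interactor, as listed above.
DESKTOP_REFINEMENTS = {
    "Activator":            ["button", "text_link", "image_link", "image_map", "mailto"],
    "Alarm":                ["text", "audio_file"],
    "Description":          ["text", "image", "audio", "video", "table"],
    "MultipleChoice":       ["check_box", "list_box"],
    "Navigator":            ["image_link", "text_link", "button", "image_map"],
    "NumericalEditFull":    ["text_field", "spin_box"],
    "NumericalEditInRange": ["text_field", "spin_box", "track_bar"],
    "PositionEdit":         ["image_map"],
    "SingleChoice":         ["radio_button", "list_box", "drop_down_list", "image_map"],
    "TextEdit":             ["text_field", "text_area"],
}

def refine(interactor, preferred=None):
    """Pick a concrete widget for an abstract interactor, honouring a
    designer preference when it is admissible, else taking the default."""
    options = DESKTOP_REFINEMENTS[interactor]
    return preferred if preferred in options else options[0]

assert refine("SingleChoice", "drop_down_list") == "drop_down_list"
assert refine("PositionEdit") == "image_map"
```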

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.
• A Description can be implemented as:
  • speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis or the application should ignore the event and continue.
  • pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform's recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of the vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group
• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)
• changing the synthesis properties (such as volume and gender)
• inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time); nomatch (the input provided does not match any possible acceptable input); and help (the user asks for support, in some platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, indicating whether or not to synthesize the last communication again.
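The vocal event model can be sketched as a small dispatcher; the handler name, the event table and its messages are hypothetical, but each event carries the message and re-prompt attributes described above.

```python
# Hypothetical dispatcher for the three vocal event types described above.
VOCAL_EVENTS = {
    # event: (message, re_prompt)
    "noinput": ("I didn't hear anything.", True),
    "nomatch": ("Sorry, I didn't understand.", True),
    "help":    ("You can say a city name, or 'quit'.", False),
}

def handle_vocal_event(event, last_prompt):
    """Return the utterances to synthesize in reaction to a vocal event."""
    message, re_prompt = VOCAL_EVENTS[event]
    out = [message]
    if re_prompt:             # repeat the last communication if requested
        out.append(last_prompt)
    return out

assert handle_vocal_event("noinput", "Where do you want to go?") == \
    ["I didn't hear anything.", "Where do you want to go?"]
assert handle_vocal_event("help", "Where do you want to go?") == \
    ["You can say a city name, or 'quit'."]
```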

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano: MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments. ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used in model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to that of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.
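The suspend/resume behaviour singled out above can be illustrated with a small task stack; this is an informal sketch of the operator's intent, not of the normative metamodel, and the class and task names are invented.

```python
class TaskRunner:
    """Suspend-resume semantics: a safety-critical task pre-empts the
    current task, which resumes once the urgent task completes."""
    def __init__(self):
        self._stack = []

    def start(self, task):
        self._stack.append(task)

    def suspend_with(self, urgent_task):
        self._stack.append(urgent_task)   # current task is suspended, not lost

    def finish_current(self):
        return self._stack.pop()          # resumes whatever was underneath

    @property
    def active(self):
        return self._stack[-1]

runner = TaskRunner()
runner.start("navigation_ui")
runner.suspend_with("hazard_alert")   # e.g. an upcoming-hazard warning
assert runner.active == "hazard_alert"
runner.finish_current()               # hazard passed
assert runner.active == "navigation_ui"
```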


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design-time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much at the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, pp. 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, ACM, 2009, pp. 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS 2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



at night, due to the lights of oncoming traffic, or in the day when the sun is low on the horizon. Likewise, aural alerts may be ineffective when driving with the windows down or when the passengers are talking noisily.

Automotive represents a good proving ground for the Serenoa ideas for context adaptation. W3C plans to hold a Web and Automotive workshop in late 2012 and to launch standards work thereafter. This provides an opportunity for standardizing models for the context of use, including models of cognitive load, as well as an automotive-oriented version of AAL-DL.

2.6 Multidimensional Adaptation of Service Front Ends

The theoretical framework for Serenoa is structured in three components:

• Context-aware Reference Framework (CARF)
• Context-aware Design Space (CADS)
• Context-aware Reference Ontology (CARFO)

Together these provide the concepts and the means for defining, implementing and evaluating context-aware interactive systems.

2.6.1 CARF Reference Framework

The Context-aware Reference Framework (CARF) provides core concepts for defining and implementing adaptive and adaptable systems.

The above figure illustrates the main axes

• What kinds of things are being adapted, e.g. the navigational flow or the size of text and images

• Who is triggering and controlling the adaptation process, e.g. the end user, the system, or a third party

• When the adaptation takes place, e.g. design-time or run-time


• Where adaptation takes place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity

• Which aspects of the context are involved in the adaptation
• How the adaptation is performed, i.e. what strategies and tactics are involved

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and during the run-time of interactive systems.

2.6.2 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate and compare multiple applications with regard to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.


Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system's design, specification, implementation and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the Cameleon reference framework

2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in forward-chaining mode, rules are found that match the current state of a model, and their conclusions are fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule-weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions and the rule engine then tries to find which further rules would match the conditions.

Forward-chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later when all of the applicable rules have fired.
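The forward-chaining cycle described above can be sketched as a naive fixed-point loop (not RETE); the rule representation as (condition, conclusion) function pairs over a fact set is an assumption for illustration.

```python
def forward_chain(facts, rules):
    """Naive forward chaining: fire every rule whose condition matches the
    current facts, until no rule adds anything new (a fixed point).
    Each rule is a (condition, conclusion) pair of functions over the facts."""
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(facts):
                new = conclusion(facts)
                if not new <= facts:      # only count genuinely new facts
                    facts |= new
                    changed = True
    return facts

# Two toy design-time rules over a set of string facts.
rules = [
    (lambda f: "platform:mobile" in f,
     lambda f: {"layout:single-column"}),
    (lambda f: "layout:single-column" in f,
     lambda f: {"navigation:tabbed"}),
]

result = forward_chain({"platform:mobile"}, rules)
assert "navigation:tabbed" in result   # fired via the intermediate conclusion
```

A production-quality engine would add conflict resolution (e.g. first-match or rule weighting, as discussed above) rather than firing every matching rule.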

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface and down to other targets at the concrete user interface layer.

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement are required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated with lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
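The interplay of the three modules can be sketched as follows; the module interfaces and the rule format are assumptions for illustration, not the Serenoa APIs.

```python
class ContextManager:
    """Tracks the context of use and signals changes to listeners."""
    def __init__(self):
        self._context = {}
        self._listeners = []

    def subscribe(self, listener):
        self._listeners.append(listener)

    def update(self, key, value):
        self._context[key] = value
        for listener in self._listeners:          # signal the change
            listener(key, value, dict(self._context))

class AdaptationEngine:
    """Executes event-condition-action rules against context changes."""
    def __init__(self, runtime):
        self.rules = []        # list of (event_key, condition, action)
        self.runtime = runtime

    def add_rule(self, event_key, condition, action):
        self.rules.append((event_key, condition, action))

    def on_change(self, key, value, context):
        for event_key, condition, action in self.rules:
            if key == event_key and condition(context):
                self.runtime.apply(action(context))

class RuntimeEngine:
    """Maps the concrete UI to the final UI, applying suggested adaptations."""
    def __init__(self):
        self.final_ui = {"font_size": "medium"}

    def apply(self, changes):
        self.final_ui.update(changes)

runtime = RuntimeEngine()
engine = AdaptationEngine(runtime)
context = ContextManager()
context.subscribe(engine.on_change)

# Example rule: in bright ambient light, enlarge the text.
engine.add_rule("ambient_light",
                lambda ctx: ctx["ambient_light"] == "bright",
                lambda ctx: {"font_size": "large"})

context.update("ambient_light", "bright")
assert runtime.final_ui["font_size"] == "large"
```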

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface level and the concrete user interface level.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high level adaptation rules expressed in AAL-DL into the final user interface.
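For instance, a simple width-based adaptation rule could be compiled down to a CSS Media Query string; the rule format below is invented for illustration:

```python
# Sketch of "compiling" a simple adaptation rule into a CSS Media Query,
# one of the compilation targets mentioned above. The rule representation
# (feature, threshold, declarations) is a hypothetical simplification.

def compile_to_css(feature, threshold, declarations):
    """Emit a @media block applying declarations when feature <= threshold."""
    body = " ".join(f"{prop}: {val};" for prop, val in declarations.items())
    return f"@media (max-{feature}: {threshold}px) {{ body {{ {body} }} }}"

# A rule that enlarges text on narrow (e.g. handheld) screens.
css = compile_to_css("width", 480, {"font-size": "18px"})
```

A real compiler would need to reject rules that depend on context information (such as user capabilities) that media queries cannot express, falling back to script-based adaptation for those.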


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first order adaptation rules for a specific context of use, and second order rules that select which first order rules to apply. Further work is under consideration for third order rules that act on second order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high level syntax aimed at direct editing.

Here is an example of a rule:

• If the user is colour-blind, then use an alternative colour palette

In XML this looks like

A significant challenge will be to explore the practicality of enabling developers to work with a high level rule syntax rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model Based User Interfaces Working Group was formed on 17 October 2011 and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose for how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles were published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers
• handheld - Intended for handheld devices (typically small screen, limited bandwidth)
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently the specification has added further capabilities, and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face to face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until the 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


• Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications follow the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:

The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1n
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

The second symbol for enabling denotes task enabling with information passing; likewise, the second symbol for concurrency denotes concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user
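A small task model using the operators and allocations above can be sketched in code; this representation is hypothetical and is not the CTT XML interchange format:

```python
# Hypothetical rendering of a small CTT-style task model as Python data,
# using the enabling operator (">>") from the table above.

class Task:
    def __init__(self, name, allocation, operator=None, children=()):
        self.name = name
        self.allocation = allocation  # "system", "user input", "cognition" or abstract
        self.operator = operator      # temporal operator joining the children
        self.children = list(children)

# "log in" = enter name >> enter password >> system checks credentials
login = Task("log in", "abstract", operator=">>", children=[
    Task("enter name", "user input"),
    Task("enter password", "user input"),
    Task("check credentials", "system"),
])

def leaf_tasks(task):
    """Collect leaf tasks in left-to-right (temporal) order."""
    if not task.children:
        return [task.name]
    return [name for child in task.children for name in leaf_tasks(child)]
```

The hierarchical structure and the temporal operator between siblings capture the two core aspects of the notation listed above.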

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user and task oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance independent presentation concepts with appliance dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application dependent data. The style element binds UI components to their implementation, e.g. java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application dependent but appliance independent events, and then bind them to appliance dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:

1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes by sending events to state charts or to call functions in the backend
• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML and defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied with a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of the users, with focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor-compositions are:

• Grouping: a generic group of interactor elements
• Relation: a group where two or more elements are related to each other
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements
• Repeater: used to repeat the content according to data retrieved from a generic data source
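The interactor subtypes and compositions above can be sketched as a small class hierarchy; the class names follow the text, while the constructors and fields are invented for illustration:

```python
# Hedged sketch of the MARIA abstract interactor subtypes as Python
# classes; everything beyond the subtype names is an assumption.

class Interactor:
    pass

class Selection(Interactor):
    """Single or Multiple Choice over a predefined list of values."""
    def __init__(self, choices, multiple=False):
        self.choices = choices
        self.multiple = multiple
        self.selected = []

class Edit(Interactor):
    """Text, Numerical, Position or generic Object edit."""
    def __init__(self, value=None):
        self.value = value

class Control(Interactor):
    """Navigator (switch presentation) or Activator (invoke functionality)."""

class OnlyOutput(Interactor):
    """Description, Alarm, Feedback or generic Object."""

class Grouping:
    """Interactor composition: a generic group of interactor elements."""
    def __init__(self, *elements):
        self.elements = list(elements)

# A presentation fragment built from one composition and three interactors.
presentation = Grouping(
    Selection(["red", "green", "blue"]),  # Single Choice
    Edit(value=""),                       # Text Edit
    Control(),                            # e.g. an Activator
)
```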

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases etc.). One declaration contains the signature of the external function that specifies its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionality (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations.

This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
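As an illustration of the data binding and property-change mechanisms described above, the following sketch models a data element bound to two interactors. All class and method names here are hypothetical; MARIA itself expresses these constructs in XML.

```python
# Illustrative sketch of MARIA-style two-way binding between interactors
# and a data model element, raising property-change events.
# Class and method names are hypothetical, not MARIA XML constructs.

class DataElement:
    """A data model element that notifies bound interactors on change."""
    def __init__(self, value=None):
        self.value = value
        self.observers = []

    def set(self, value):
        self.value = value
        for interactor in self.observers:
            interactor.refresh(value)   # model -> UI direction

class Interactor:
    """An abstract interactor bound to a data element."""
    def __init__(self, name, binding):
        self.name = name
        self.binding = binding
        self.state = binding.value
        self.handlers = []              # property-change event handlers
        binding.observers.append(self)

    def user_input(self, value):
        """The user edits the interactor: update the bound data element."""
        self.binding.set(value)         # UI -> model direction
        for handler in self.handlers:
            handler(self, value)        # raise a property-change event

    def refresh(self, value):
        self.state = value

# Two interactors bound to the same data element stay in sync.
age = DataElement(30)
field = Interactor("age_field", age)
label = Interactor("age_label", age)
field.user_input(42)
print(label.state)  # -> 42
```

Editing one interactor updates the bound data element, which in turn refreshes every other interactor bound to it, mirroring the "and vice-versa" behaviour described in the text.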

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.

• Mobile CUIs model graphical interfaces for mobile devices.

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.

• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented on the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model.

The existing elements with new attributes are:

• Presentation: contains the presentation_setting attribute, which holds information on the title, background (color or image) and the font used.

• Grouping: contains the grouping_setting attribute, which holds information on the grouping display technique (grid, fieldset, bullet, background color or image) and on whether the elements are related with an ordering or hierarchy relation.

The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  • speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.

  • pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);

• changing the synthesis properties (such as volume and gender);

• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (when the user asks for support, in some platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


35 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation, and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

36 MBUI WG Note - Glossary of Terms

This document is currently in preparation, and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

37 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.
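The suspend and resume behaviour can be illustrated with a minimal sketch of a task controller that suspends the active task in favour of a safety-critical alert. All names here are hypothetical; this is not the metamodel defined in the specification.

```python
# Sketch of suspend/resume: a safety-critical alert suspends the active
# task, which is resumed once the hazard has passed.
# Hypothetical names; not the W3C task model specification's metamodel.

class Task:
    def __init__(self, name):
        self.name = name
        self.state = "idle"

class TaskController:
    def __init__(self):
        self.active = None
        self.suspended = []             # stack of suspended tasks

    def start(self, task):
        task.state = "running"
        self.active = task

    def interrupt(self, alert):
        """Suspend the active task in favour of a safety-critical one."""
        if self.active is not None:
            self.active.state = "suspended"
            self.suspended.append(self.active)
        self.start(alert)

    def finish_active(self):
        """The alert is dismissed: resume the suspended task."""
        self.active.state = "done"
        self.active = None
        if self.suspended:
            self.start(self.suspended.pop())

controller = TaskController()
navigation = Task("navigation_ui")
hazard = Task("hazard_alert")

controller.start(navigation)
controller.interrupt(hazard)      # hazard alert takes over
controller.finish_active()        # hazard passed; navigation resumes
print(controller.active.name)     # -> navigation_ui
```

The stack allows nested interruptions: a second alert would suspend the first, and each is resumed in reverse order as it completes.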


38 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation, and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such it is likely to be subject to revision in the First Public Working Draft.

39 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much at the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


• Where adaptation takes place, e.g. in the device hosting the user interface, in the cloud, or at some proxy entity

• Which aspects of the context are involved in the adaptation

• How the adaptation is performed, i.e. what strategies and tactics are involved

It is unclear how CARF could be standardized. An informative description is fine, but the question to be answered is how CARF is exposed in design tools and at run-time in interactive systems.

262 CADS Design Space

The Context-aware Design Space (CADS) provides a means to analyse, evaluate and compare multiple applications with regard to their coverage level of adaptation, e.g. for dimensions such as modality types.

CADS defines a number of axes for considering adaptation. All of these axes form an ordered dimension; however, their levels do not always have equal proportions. These are illustrated in the following figure.


Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

263 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

27 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the CAMELEON reference framework.

2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces.

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in forward-chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule-weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.

Forward-chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE algorithm. Rule conditions can involve externally defined functions, provided these are free of side effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface, and down to other targets at the concrete user interface layer.

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement are required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated with lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


28 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules, as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
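The interplay of the three modules can be sketched as follows. The module interfaces and the single noise-level rule are illustrative assumptions, not the APIs defined in the Serenoa deliverables.

```python
# Sketch of the three run-time modules: the Context Manager signals a
# context change, the Adaptation Engine selects an adaptation, and the
# Run-time Engine applies it to the final user interface.
# Module interfaces here are illustrative, not the Serenoa APIs.

class RuntimeEngine:
    def __init__(self):
        self.final_ui = {"modality": "vocal", "captions": False}

    def apply(self, adaptation):
        self.final_ui.update(adaptation)

class AdaptationEngine:
    def __init__(self, runtime):
        self.runtime = runtime

    def on_context_change(self, context):
        # One hypothetical ECA rule: in a noisy environment, switch the
        # output to the graphical modality and enable captions.
        if context["noise_level"] == "high":
            self.runtime.apply({"modality": "graphical", "captions": True})

class ContextManager:
    def __init__(self):
        self.context = {"modality": "graphical", "noise_level": "low"}
        self.listeners = []

    def update(self, **changes):
        self.context.update(changes)
        for listener in self.listeners:
            listener(self.context)      # signal the change

runtime = RuntimeEngine()
engine = AdaptationEngine(runtime)
context = ContextManager()
context.listeners.append(engine.on_context_change)

context.update(noise_level="high")  # environment change triggers adaptation
print(runtime.final_ui["modality"])  # -> graphical
```

The same listener pattern would let several adaptation engines subscribe to one Context Manager, whether the modules run on the device or in the cloud.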

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.
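As a sketch of such compilation, a high-level rule in a hypothetical dictionary format (not AAL-DL syntax) can be translated into a CSS media query string:

```python
# Sketch: compiling a high-level adaptation rule into a CSS media query.
# The rule format below is a hypothetical illustration, not AAL-DL.

def compile_to_media_query(rule):
    condition = " and ".join(f"({k}: {v})" for k, v in rule["when"].items())
    body = "; ".join(f"{k}: {v}" for k, v in rule["then"].items())
    return f"@media {condition} {{ .{rule['target']} {{ {body}; }} }}"

rule = {
    "when": {"max-width": "480px"},   # small-screen context of use
    "target": "menu",
    "then": {"display": "none"},      # hide the full menu on phones
}
css = compile_to_media_query(rule)
print(css)  # -> @media (max-width: 480px) { .menu { display: none; } }
```

Only adaptations whose triggering context is observable by the browser (screen size, orientation, media type) can be compiled this way; others must remain with the run-time Adaptation Engine.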


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

29 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformations.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule

• If user is colour-blind then use alternative color palette

In XML this looks like

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


210 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011 and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose for how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers
• handheld - Intended for handheld devices (typically small screen, limited bandwidth)
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)
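As a brief illustration, a style sheet can scope rules to one of these media types with an @media block:

```css
/* Larger base font when printing */
@media print {
  body { font-size: 12pt; }
}

/* Compact layout for handheld devices */
@media handheld {
  body { font-size: 90%; margin: 0; }
}
```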

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face to face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face meeting was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows.

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


• Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications follow the following stages. These have been annotated with the dates the MBUI deliverables were envisioned by the charter to reach each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework; see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104): http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram.


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
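Such a dialogue structure might be sketched in XML along the following lines; the element and attribute names here are illustrative assumptions, not the normative ASFE-DL vocabulary:

```xml
<!-- Illustrative ASFE-DL-style abstract dialogue (assumed element names) -->
<abstractInteractionUnit id="searchFlight">
  <interactor type="input" binds="domain.destination"/>
  <interactor type="activator" invokes="domain.search"/>
  <eventHandler event="searchCompleted" navigateTo="resultsUnit"/>
</abstractInteractionUnit>
```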

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT '97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

  Enabling:            T1 >> T2  or  T1 []>> T2
  Disabling:           T1 [> T2
  Interruption:        T1 |> T2
  Choice:              T1 [] T2
  Iteration:           T1*  or  T1n
  Concurrency:         T1 ||| T2  or  T1 |[]| T2
  Optionality:         [T]
  Order Independency:  T1 |=| T2

Where the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.
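As an illustration, a small task model with an enabling relation between two tasks might be serialized along these lines; the element names are illustrative assumptions, not the normative CTT schema:

```xml
<!-- Illustrative CTT-style task model serialization (assumed element names) -->
<taskModel name="AccessHotelInfo">
  <task id="SelectHotel" allocation="user">
    <!-- enabling with information passing: []>> -->
    <enabling target="ShowHotelInfo" informationPassing="true"/>
  </task>
  <task id="ShowHotelInfo" allocation="system"/>
</taskModel>
```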

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item
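A use model built from these elementary use objects might be sketched as follows; the element names are illustrative assumptions based on the description above, not the normative UseML schema:

```xml
<!-- Illustrative UseML-style use model (assumed element names) -->
<useModel name="PumpControl">
  <useObject name="OperatePump" userGroup="operator">
    <trigger name="StartPump"/>
    <select name="OperatingMode"/>
    <input name="TargetFlowRate"/>
    <output name="CurrentPressure"/>
    <change name="AdjustFlowRate"/>
  </useObject>
</useModel>
```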

The following diagram describes the UseDM meta-model


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces; see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

  "UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS; see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">

  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>

  <logic></logic>

</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
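As a hedged illustration of how these sections work together, here is a minimal interface with one button; the part/property vocabulary follows the general shape of UIML, but the specific class and property names in this sketch are assumptions, not taken from the specification:

```xml
<!-- Illustrative UIML sketch (assumed class and property names) -->
<uiml version="2.0">
  <interface name="Hello" class="MyApps">
    <structure>
      <part id="okButton" class="Button"/>
    </structure>
    <style>
      <!-- bind the abstract part to a label and a concrete AWT widget -->
      <property part-name="okButton" name="label">OK</property>
      <property part-name="okButton" name="rendering">java.awt.Button</property>
    </style>
  </interface>
</uiml>
```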

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:

1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows.


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping
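For instance, the behaviour of a button-like interactor could be captured in SCXML as a small state machine; the state and event names here are illustrative, but the SCXML syntax itself follows the W3C notation:

```xml
<!-- Two-state interactor behaviour in SCXML (illustrative state/event names) -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <transition event="press" target="active"/>
  </state>
  <state id="active">
    <transition event="release" target="idle"/>
  </state>
</scxml>
```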

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend
• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces, and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium; see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.
• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).
• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).
• Only output: represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements
• Relation: a group where two or more elements are related to each other
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements
• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlation between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
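Putting these concepts together, an abstract presentation might be sketched along the following lines; the element names here are illustrative assumptions reflecting the concepts above, not the normative MARIA schema:

```xml
<!-- Illustrative MARIA-style abstract presentation (assumed element names) -->
<presentation name="SearchHotel">
  <data_model>
    <xs:element name="city" type="xs:string"/>
  </data_model>
  <external_functions>
    <function name="findHotels" input="xs:string" output="xs:string"/>
  </external_functions>
  <grouping>
    <text_edit id="cityField" data_reference="city"/>
    <activator id="searchButton">
      <activation_event function="findHotels"/>
    </activator>
  </grouping>
</presentation>
```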

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers
• Mobile CUIs model graphical interfaces for mobile devices
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute with information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute with information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
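The inheritance-based refinement described above can be illustrated with a short sketch. The class names below (Interactor, Activator, Button, ImageMap) are hypothetical illustrations of the pattern, not the actual MARIA metamodel classes:

```python
# Illustrative sketch of CUI refinement by inheritance: concrete widgets
# extend abstract interactors with platform-dependent (but still
# implementation-language independent) attributes. Names are assumptions.

class Interactor:
    """Abstract (AUI-level) interactor: platform independent."""
    def __init__(self, identifier):
        self.identifier = identifier

class Activator(Interactor):
    """AUI interactor that triggers an action."""
    pass

class Button(Activator):
    """Concrete Desktop refinement of Activator, adding a label."""
    def __init__(self, identifier, label):
        super().__init__(identifier)
        self.label = label

class ImageMap(Activator):
    """Concrete refinement: an image with areas mapped to values."""
    def __init__(self, identifier, image, areas):
        super().__init__(identifier)
        self.image = image
        self.areas = areas  # each area is associated with a different value

ok = Button("confirm", label="OK")
print(isinstance(ok, Activator))  # True: the concrete widget is still an AUI Activator
```

Because the concrete classes inherit from the abstract ones, any tool that understands the AUI entities can still process the refined CUI model.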

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal


interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.

  ◦ pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);

• changing the synthesis properties (such as volume and gender);

• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, indicating whether or not to synthesize the last communication again.
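As a rough sketch, the three vocal event types and their two attributes could be modelled as follows. The class, field and message strings are illustrative assumptions, not part of the MARIA specification:

```python
# Illustrative model of the vocal event types (noinput, nomatch, help),
# each carrying a message and a re-prompt flag. Names are assumptions.

from dataclasses import dataclass

@dataclass
class VocalEventHandler:
    message: str    # what to render when the event occurs
    reprompt: bool  # whether to synthesize the last communication again

handlers = {
    "noinput": VocalEventHandler("Sorry, I did not hear you.", reprompt=True),
    "nomatch": VocalEventHandler("Sorry, I did not understand.", reprompt=True),
    "help":    VocalEventHandler("You can say a city name or a date.", reprompt=False),
}

def on_event(event_type, last_prompt):
    """Return the utterances the platform should synthesize for an event."""
    handler = handlers[event_type]
    utterances = [handler.message]
    if handler.reprompt:
        utterances.append(last_prompt)  # repeat the last communication
    return utterances

print(on_event("noinput", "Where do you want to go?"))
# ['Sorry, I did not hear you.', 'Where do you want to go?']
```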

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano: MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments. ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology, and moreover there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend/resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.
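To illustrate the kind of alternative interchange format envisaged, a CTT-like task tree could be serialized to JSON along these lines. The structure and field names here are invented for illustration and do not follow the W3C task-model XML schema:

```python
# Hypothetical JSON interchange for a small task model; the field names
# are illustrative only, not the normative W3C interchange format.

import json

task_model = {
    "task": "PlanTrip",
    "operator": "enabling",  # temporal operator between the subtasks
    "subtasks": [
        {"task": "SelectDestination", "type": "interaction"},
        {"task": "ShowItinerary", "type": "application"},
    ],
}

# Round-trip the model through the textual interchange format.
encoded = json.dumps(task_model)
decoded = json.loads(encoded)
print(decoded["subtasks"][0]["task"])  # SelectDestination
```

A JSON encoding like this is attractive for web-based authoring tools, since browsers can parse it natively without an XML toolchain.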

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation) and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for


adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design-time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, the Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting


model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A Language Supporting Multi-path Development of User Interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS 2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



Designers can use CADS as a conceptual model to guide their thinking. It can also provide a means for classifying collections of adaptation rules. It is unclear at this point just how CADS would feed into standardization, except as a shared vocabulary for talking about specific techniques.

2.6.3 CARFO Multidimensional Adaptation Ontology

The Context-aware Reference Ontology (CARFO) formalizes the concepts and relationships expressed in the Context-aware Reference Framework (CARF). CARFO enables browsing and search for information relevant to defining and implementing the adaptation process. This is useful throughout all of the phases of an interactive system: design, specification, implementation and evaluation.

Standardizing CARFO is essentially a matter of building a broad consensus around the concepts and relationships expressed in the ontology. This can be useful in ensuring a common vocabulary, even if the ontology isn't used directly in the authoring and run-time components of interactive systems.

2.7 Design-time adaptation rules

Design-time adaptation rules have two main roles:

1. To propagate the effects of changes across layers in the CAMELEON Reference Framework.

2. To provide a check on whether a user interface design complies with guidelines, e.g. corporate standards aimed at ensuring consistency across user interfaces.

One way to represent adaptation rules is as follows:

IF condition THEN conclusion

When executed in forward-chaining mode, rules are found that match the current state of a model, and the conclusion is fired to update the model. This process continues until all applicable rules have been fired. If more than one rule applies at a given instance, a choice has to be made, e.g. execute the first matching rule, or use a rule weighting scheme to pick a rule. Some rule engines permit a mix of forward and backward (goal-driven) execution, where rules are picked based upon their conclusions, and the rule engine then tries to find which further rules would match the conditions.
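A minimal forward-chaining loop of this kind can be sketched as follows. This is a toy engine using the "execute the first matching rule" conflict-resolution strategy, not a real production-rule system such as a RETE-based engine; the example rules are invented for illustration:

```python
# Toy forward-chaining engine: repeatedly fire the first rule whose
# condition matches the model, until no rule applies.

def forward_chain(model, rules):
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(model):
                conclusion(model)  # fire the rule: update the model
                changed = True
                break              # restart from the first rule
    return model

rules = [
    # IF no label is set THEN add a default label
    (lambda m: "label" not in m,
     lambda m: m.update(label="Untitled")),
    # IF a label is set but no tooltip THEN derive the tooltip from it
    (lambda m: "label" in m and "tooltip" not in m,
     lambda m: m.update(tooltip=m["label"])),
]

print(forward_chain({}, rules))
# {'label': 'Untitled', 'tooltip': 'Untitled'}
```

Note that each conclusion disables its own condition, so the loop reaches a fixpoint; in a real engine, termination and conflict resolution need more care.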

Forward-chaining production rules can be efficiently executed by trading off memory against speed, e.g. using variants of the RETE


algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later, when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface and down to other targets at the concrete user interface layer.
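The propagation of edits between layers can be sketched as an event handler. The layer names echo the CAMELEON abstraction levels, but the dispatch code itself is an illustrative assumption, not a Serenoa component:

```python
# Sketch of event-driven propagation of a change in one model to the
# other layers (abstract UI plus per-platform concrete UIs).

models = {
    "abstract":   {"title": "Search"},
    "desktop":    {"title": "Search"},
    "smartphone": {"title": "Search"},
}

def on_change(layer, key, value):
    """Event: the designer edited `key` in `layer`; propagate the change."""
    models[layer][key] = value
    if layer == "abstract":
        # Change at the AUI: push down to every concrete target.
        targets = ["desktop", "smartphone"]
    else:
        # Change at a CUI: push up to the AUI and across to peer targets.
        models["abstract"][key] = value
        targets = [t for t in models if t not in ("abstract", layer)]
    for target in targets:
        models[target][key] = value

on_change("smartphone", "title", "Find")
print(models["desktop"]["title"])  # Find
```

A real system would apply mapping rules rather than copy values verbatim, since each concrete layer has its own vocabulary of widgets and attributes.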

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement are required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to the separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated with lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
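The interplay of the three modules could be wired together as in this sketch. The class and method names, and the example adaptation rule, are assumptions made for illustration; they are not the Serenoa APIs:

```python
# Illustrative wiring of the three run-time modules; names are assumptions.

class ContextManager:
    """Tracks the context of use and signals changes to listeners."""
    def __init__(self):
        self.context, self.listeners = {}, []
    def update(self, key, value):
        self.context[key] = value
        for listener in self.listeners:   # signal the change
            listener(self.context)

class AdaptationEngine:
    """Executes adaptation rules against the current context."""
    def __init__(self, runtime):
        self.runtime = runtime
    def __call__(self, context):
        # Example rule: in dark surroundings, switch to a high-contrast theme.
        if context.get("luminosity") == "dark":
            self.runtime.apply("theme", "high-contrast")

class RuntimeEngine:
    """Maps the concrete UI to the final UI, applying suggested adaptations."""
    def __init__(self):
        self.final_ui = {"theme": "default"}
    def apply(self, prop, value):
        self.final_ui[prop] = value

runtime = RuntimeEngine()
context_manager = ContextManager()
context_manager.listeners.append(AdaptationEngine(runtime))

context_manager.update("luminosity", "dark")
print(runtime.final_ui["theme"])  # high-contrast
```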

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and for second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for


interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If the user is colour-blind, then use an alternative colour palette.

In XML this looks like

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section describes the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT


computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device-independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, at a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices.

• braille - Intended for braille tactile feedback devices.

• embossed - Intended for paged braille printers.

• handheld - Intended for handheld devices (typically small screen, limited bandwidth).

• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• screen - Intended primarily for color computer screens.

• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.

• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.

• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available).

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/
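As a generic illustration of the syntax, a style sheet can select rules by media type and, with CSS3 Media Queries, by device features such as viewport width (the specific selectors and breakpoint below are arbitrary examples):

```css
/* CSS2 media types: different rules for screen and print */
@media screen { body { font-size: 1em; } }
@media print  { body { font-size: 10pt; } }

/* CSS3 Media Queries add feature tests, e.g. narrow screens */
@media screen and (max-width: 480px) {
  nav { display: none; }
}
```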

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user-agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter/
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face to face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter

The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaption to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:

Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications proceed through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.

3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram.

Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.


The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1(n)
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

where the second symbol for enabling is for task enabling with information passing, and likewise the second symbol for concurrency is for concurrent communicating tasks.
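As a purely hypothetical illustration (the task names are invented, not taken from the submission), a fragment of a search dialogue might combine these operators as follows:

```
# Hypothetical CTT task expression, for illustration only:
# entering a query enables (with information passing) the concurrent
# tasks of showing hints and submitting the query, and the whole
# composition can be disabled by closing the dialogue.
EnterQuery []>> (ShowHints ||| SubmitQuery) [> CloseDialogue
```

In a full model each of these named tasks would itself be refined hierarchically into subtasks.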

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:

There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic (atomic) activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model.

The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance independent presentation concepts with appliance dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

    UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application dependent data. The style element binds UI components to their implementation, e.g. java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application dependent but appliance independent events, and then bind them to appliance dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
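As a rough, hypothetical fragment (the part names, property names and values are invented for illustration and may not match any UIML version exactly), a menu item bound to a Java implementation class might be described along these lines:

```xml
<!-- Hypothetical UIML fragment; identifiers are illustrative only -->
<uiml version="2.0">
  <interface name="Example" class="MyApps">
    <structure>
      <part id="exitItem" class="MenuItem"/>
    </structure>
    <style>
      <!-- bind the abstract part to a concrete implementation class -->
      <property part-name="exitItem" name="rendering">java.awt.MenuItem</property>
      <property part-name="exitItem" name="text">Exit</property>
    </style>
  </interface>
</uiml>
```

The same structure could be re-styled for a different appliance by swapping the style section while leaving the structure untouched.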

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows.


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, Redis TupleSpace and MMI-Arch. For more details see the link above.
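For illustration, interactor behaviour of the kind AIM describes could be sketched in SCXML as follows; the state and event names here are invented, not taken from the AIM submission:

```xml
<!-- Sketch of event-based state transitions in W3C SCXML;
     the states and events are hypothetical -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <transition event="focus" target="listening"/>
  </state>
  <state id="listening">
    <transition event="input.done" target="idle"/>
    <transition event="cancel" target="idle"/>
  </state>
</scxml>
```

A mapping layer can then observe such state machines for changes and raise events against them, which is the role the MIM submission (next section) assigns to observations and actions.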

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes by sending events to state charts or to call functions in the backend
• Operators - specify multimodal relations and link a set of observations to a set of actions

The operators include sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework, where task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower level
• Abstraction: from low to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows.

UsiXML is accompanied with a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface") which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.
• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).
• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements
• Relation: a group where two or more elements are related to each other
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements
• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlation between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.
• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases etc.). One declaration contains the signature of the external function, which specifies its name and its input/output parameters.
• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).
• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.
• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
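As a minimal sketch of what such a data model could look like using standard XML Schema constructs (the element names and the enumeration are invented for illustration, not taken from MARIA):

```xml
<!-- Hypothetical data model fragment in XML Schema; names are illustrative -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="reservation">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="guestName" type="xs:string"/>
        <xs:element name="nights" type="xs:positiveInteger"/>
        <xs:element name="roomType">
          <xs:simpleType>
            <xs:restriction base="xs:string">
              <xs:enumeration value="single"/>
              <xs:enumeration value="double"/>
            </xs:restriction>
          </xs:simpleType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

An interactor bound to the roomType element would naturally map to a Single Choice, while guestName would map to a Text Edit.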

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers
• Mobile CUIs model graphical interfaces for mobile devices
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, to specify the possible concrete implementations of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model.

The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, holding information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute, holding information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation.

The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.
• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.
• A Description can be implemented as a text, image, audio, video or table.
• A MultipleChoice can be implemented as a check_box or a list_box.
• A Navigator can be implemented as an image_link, text_link, button or image_map.
• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).
• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.
• A PositionEdit can be implemented as an image_map.
• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.
• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.
• A Description can be implemented as:
  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.
  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.
• A SingleChoice can be implemented as a vocal selection that accepts only one choice.
• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.
• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.
• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.
• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group
• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)
• Changing the synthesis properties (such as volume and gender)
• Inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (when the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.
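Purely as an illustration of what an alternative JSON rendering might look like (this structure is an assumption, not part of the specification), a small task model using the enabling operator could be expressed as:

```json
{
  "task": "MakeHotelReservation",
  "operator": "enabling",
  "subtasks": [
    { "task": "SelectRoom",     "category": "interaction" },
    { "task": "ConfirmBooking", "category": "application" }
  ]
}
```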

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration: it is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V., "USIXML: A language supporting multi-path development of user interfaces", Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L.D., "MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments", ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J., "A Review of XML-Compliant User Interface Description Languages", Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


algorithm. Rule conditions can involve externally defined functions, provided these are free of side-effects. This provides for flexibility in defining rule conditions. Likewise, the rule conclusions can invoke external actions. These can be invoked as a rule is fired, or later, when all of the applicable rules have fired.

To enable rules to respond to changes in models, the rules can be cast in the form of event-condition-action, where an event corresponds to a change the user has made to the model. Manual changes to the abstract user interface can be propagated to each of the targets for the concrete user interface, for instance desktop, smart phone and tablet. Likewise, manual changes to the concrete user interface for a smart phone can be propagated up to the abstract user interface and down to other targets at the concrete user interface layer.
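The event-condition-action pattern described above can be sketched in a few lines. This is an illustrative sketch only; the class names, event names and model structure are assumptions, not the Serenoa rule engine:

```python
# Minimal event-condition-action (ECA) sketch: rules listen for an event,
# test a side-effect-free condition over the model, then fire an action.

class Rule:
    def __init__(self, event, condition, action):
        self.event = event          # event type the rule listens for
        self.condition = condition  # side-effect-free predicate over the model
        self.action = action        # external action invoked when the rule fires

class RuleEngine:
    def __init__(self, rules):
        self.rules = rules

    def notify(self, event, model):
        """Fire every rule whose event matches and whose condition holds."""
        fired = []
        for rule in self.rules:
            if rule.event == event and rule.condition(model):
                fired.append(rule.action(model))
        return fired

# Example: propagate a manual change in the abstract UI to concrete targets.
targets = []
rule = Rule(
    event="abstract-ui-changed",
    condition=lambda model: model["dirty"],
    action=lambda model: targets.append(("regenerate", model["widget"])),
)
engine = RuleEngine([rule])
engine.notify("abstract-ui-changed", {"dirty": True, "widget": "login-form"})
print(targets)  # [('regenerate', 'login-form')]
```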

The set of rules acts as a cooperative assistant that applies best practices to help the designer. Sometimes additional information and human judgement are required. The rules can be written to pass off tasks to the human designer via a design agenda.

One challenge is to ensure the maintainability of the set of rules as the number of rules increases. This requires careful attention to separation of different levels of detail, so that high-level rules avoid dealing with details that are better treated with lower-level rules.

The above has focused on IF-THEN (production) rules that can respond to incremental changes in models. An alternative approach is to focus on transformation rules that map complete models from the abstract user interface to models for the concrete user interface. W3C's XSLT language provides a great deal of flexibility, but at the cost of transparency and maintainability. Other work has focused on constrained transformation languages, e.g. the Object Management Group's QVT (Query/View/Transformation) languages for transforming models.

There is an opportunity to standardize a rule language for design-time use. When bringing this to W3C, it will be important to show how the rule language relates to W3C's generic Rule Interchange Format (RIF).

Note that the Serenoa Advanced Adaptation Logic Description Language (AAL-DL) is covered in a subsequent section.


2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
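The interplay of the three modules can be sketched as follows. The interfaces, the luminosity trigger and the adaptation name are all assumptions made for illustration; they are not the Serenoa APIs or actual AAL-DL rules:

```python
# Sketch of the three run-time modules named above: the Context Manager
# signals context changes, the Adaptation Engine evaluates rules on them,
# and the Run-time Engine applies the suggested adaptations.

class ContextManager:
    """Tracks the context of use and signals changes to listeners."""
    def __init__(self):
        self.context = {"user": {}, "device": {}, "environment": {}}
        self.listeners = []

    def update(self, part, key, value):
        self.context[part][key] = value
        for listener in self.listeners:
            listener(part, key, value, self.context)

class AdaptationEngine:
    """Evaluates adaptation rules (stand-ins for AAL-DL rules) on changes."""
    def __init__(self, runtime):
        self.runtime = runtime

    def on_change(self, part, key, value, context):
        # Hypothetical rule: switch palette in low ambient light.
        if part == "environment" and key == "luminosity" and value < 30:
            self.runtime.apply("use-high-contrast-palette")

class RuntimeEngine:
    """Maps the concrete UI to the final UI, applying suggested adaptations."""
    def __init__(self):
        self.applied = []

    def apply(self, adaptation):
        self.applied.append(adaptation)

runtime = RuntimeEngine()
cm = ContextManager()
cm.listeners.append(AdaptationEngine(runtime).on_change)
cm.update("environment", "luminosity", 10)  # a change in the context of use
print(runtime.applied)  # ['use-high-contrast-palette']
```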

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules, expressed in AAL-DL, into the final user interface.


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1 AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformations.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If the user is colour-blind, then use an alternative colour palette.

In XML this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011 and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device-independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices

• braille - Intended for braille tactile feedback devices

• embossed - Intended for paged braille printers

• handheld - Intended for handheld devices (typically small screen, limited bandwidth)

• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• screen - Intended primarily for color computer screens

• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.

• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.

• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/
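As an illustration (selectors and breakpoints invented for the example), media-type rules and a CSS3 media query look like this:

```css
/* Media-type rules: different styles for screen and print */
@media screen { body { font-size: 100%; } }
@media print  { nav  { display: none; } }

/* CSS3 media queries add expressions over device features */
@media screen and (max-width: 480px) {
  body { font-size: 120%; }
}
```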

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, leading to the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy, on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group.

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face was hosted by DFKI in Kaiserslautern, Germany, on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and by the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows.

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon Reference Framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)


But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design, along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications go through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104): http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram.


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
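As a purely illustrative sketch of these concepts (the element and attribute names below are invented for exposition and are not the actual ASFE-DL schema), two interrelated dialogues might be serialized as:

```xml
<!-- two abstract dialogues: a search form and a results view -->
<abstractInteractionUnit id="search">
  <!-- interactor collecting user input into the domain model -->
  <interactor role="input" binding="domain:query"/>
  <!-- interactor activating a domain method, with an event handler
       that navigates to another dialogue -->
  <interactor role="activator" method="domain:runSearch">
    <handler event="activation" navigateTo="results"/>
  </interactor>
</abstractInteractionUnit>
<abstractInteractionUnit id="results">
  <interactor role="output" binding="domain:matches"/>
</abstractInteractionUnit>
```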

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams.


The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1(n)
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

The second symbol for enabling denotes task enabling with information passing; likewise, the second symbol for concurrency denotes concurrent communicating tasks.
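To illustrate the notation, a simple login dialogue might be expressed as follows (the task names are invented for illustration): the user enters a name and a password concurrently, and their completion enables submission, passing the entered values along:

```
(EnterName ||| EnterPassword) []>> Submit
```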

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.
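As an indication of what such an interchange document might contain (the element names here are schematic, not those of the actual CTT schema), a small task model could look like:

```xml
<!-- schematic CTT-style task model: entering credentials enables submission -->
<taskModel name="Login">
  <task name="EnterCredentials" allocation="user">
    <!-- enabling operator connecting this task to its right sibling -->
    <operator type="enabling" rightSibling="Submit"/>
  </task>
  <task name="Submit" allocation="user"/>
</taskModel>
```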

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated with attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
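To make this concrete, here is a minimal sketch following the template above; the part id and the property names are illustrative rather than taken from the UIML specification, and the style binding follows the description of binding a component to an implementation class:

```xml
<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="Hello" class="MyApps">
    <structure>
      <!-- a single part representing a menu item -->
      <part id="openItem" class="MenuItem"/>
    </structure>
    <style>
      <!-- bind the part to its implementation class and set its label -->
      <property part-name="openItem" name="rendering">java.awt.MenuItem</property>
      <property part-name="openItem" name="text">Open</property>
    </style>
  </interface>
</uiml>
```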

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic models are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic models as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.
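As an illustration of the SCXML approach (the state and event names here are hypothetical, not taken from the AIM specification), the behaviour of a simple input interactor could be modelled as:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- hypothetical state chart for a simple input interactor -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <transition event="focus" target="active"/>
  </state>
  <state id="active">
    <!-- accepted input completes the interactor; losing focus returns to idle -->
    <transition event="input.accepted" target="done"/>
    <transition event="blur" target="idle"/>
  </state>
  <final id="done"/>
</scxml>
```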

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend
• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators, including sequence, redundancy, complementarity, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, as MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation, and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower level
• Abstraction: from low to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.
• Relation: a group where two or more elements are related to each other.
• Composite Description: represents a group aimed at presenting contents through a mixture of Description and Navigator elements.
• Repeater: used to repeat the content according to data retrieved from a generic data source.

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases etc.). One declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
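As a rough sketch of how these abstract-level concepts fit together (the element and attribute names below are illustrative, not the normative MARIA syntax), an abstract presentation with a selection interactor bound to the data model, and an activator invoking a back-end function, might look like:

```xml
<!-- illustrative MARIA-style abstract UI (names are not normative) -->
<interface>
  <!-- data model: types manipulated by the UI, defined via XML Schema -->
  <data_model>
    <element name="country" type="xs:string"/>
  </data_model>
  <!-- external function implemented by the generic back end -->
  <external_function name="submitChoice" input="country"/>
  <presentation name="choose_country">
    <grouping>
      <!-- selection interactor bound to the "country" data element -->
      <single_choice data="country">
        <choice value="Italy"/>
        <choice value="France"/>
      </single_choice>
      <!-- activator raising an activation event that invokes the back end -->
      <activator event="activation" function="submitChoice"/>
    </grouping>
  </presentation>
</interface>
```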

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.
• Mobile CUIs model graphical interfaces for mobile devices.
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.
• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute holding information on the title, background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute holding information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related by an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation to be a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis, or whether the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user, and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.
• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).
• Changing the synthesis properties (such as volume and gender).
• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
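Schematically, these handlers could be attached to a vocal interactor as follows; the markup below illustrates the event model just described and is not the normative MARIA vocal syntax:

```xml
<!-- hypothetical vocal interactor with handlers for the three event types -->
<vocal_textual_input request="Please say your destination city">
  <event type="noinput" message="Sorry, I did not hear anything" re-prompt="true"/>
  <event type="nomatch" message="I did not understand that" re-prompt="true"/>
  <event type="help" message="You can say the name of any major city" re-prompt="false"/>
</vocal_textual_input>
```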

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend/resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and is not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.
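In the CTT notation reviewed in section 3.4.2, such suspend/resume behaviour corresponds to the |> operator. The task names below are invented for illustration: the hazard alert suspends the navigation task, which resumes once the alert completes:

```
CarNavigation |> HazardAlert
```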


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

bull httpwwwisoorgisocatalogue_detailhtmcsnumber=38854

A standardization action has been suggested to harmonize the ISO24744 methodologies with Model-Based User InterfaceDevelopment techniques At this point this is very much in theearly planning stage

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: UsiXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, pp. 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact. 16, ACM, 2009, pp. 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



2.8 Run-time adaptation rules

Run-time rules are designed to describe how the user interface should adapt to changes in the context of use. This could be to match the user's preferences or capabilities, or to respond to a change in the environment. The event-condition-action pattern is well suited for this purpose, where events are changes in the context of use or in the user interface state. Serenoa is exploring this approach with the Advanced Adaptation Logic Description Language (AAL-DL).

The examples considered so far have focused on high-level adaptations, with the idea of invoking separate adaptation modules to determine the detailed changes that need to be applied. These modules could be implemented with production rules, but other possibilities include scripting languages or conventional programming languages like Java.

The Serenoa architecture shows the run-time as a group of three modules:

1. Context Manager
2. Adaptation Engine
3. Run-time Engine

The Context Manager keeps track of the context of use, i.e. information about the user, the device, and the environment it is operating in. It provides support for querying the context of use and for signalling changes.

The Adaptation Engine executes the AAL-DL rules as described above. The Run-time Engine maps the concrete user interface design to the final user interface, in accordance with the adaptations suggested by the Adaptation Engine. The architecture can be implemented either in the cloud or in the device itself, where the resource constraints permit this.
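The interplay of the three modules and the event-condition-action pattern can be sketched as follows. This is a minimal illustration only, not Serenoa code: the module names follow the architecture above, but the rule representation, the context attribute (`ambient_light`) and the high-contrast adaptation are invented for the example.

```python
# Minimal sketch of the run-time loop described above (illustrative only).
# An event-condition-action rule fires when the context of use changes.

class ContextManager:
    """Keeps track of the context of use and signals changes."""
    def __init__(self):
        self.context = {}
        self.listeners = []

    def update(self, key, value):
        self.context[key] = value
        for listener in self.listeners:
            listener({"changed": key, "context": self.context})

class AdaptationEngine:
    """Executes event-condition-action rules against context changes."""
    def __init__(self, rules, runtime):
        self.rules = rules
        self.runtime = runtime

    def __call__(self, event):
        for rule in self.rules:
            if rule["event"] == event["changed"] and rule["condition"](event["context"]):
                self.runtime.apply(rule["action"])

class RuntimeEngine:
    """Maps the concrete UI design to the final UI, applying adaptations."""
    def __init__(self):
        self.applied = []

    def apply(self, action):
        self.applied.append(action)

# Hypothetical rule: switch to a high-contrast palette in bright light.
rules = [{"event": "ambient_light",
          "condition": lambda ctx: ctx["ambient_light"] > 0.8,
          "action": "use_high_contrast_palette"}]

runtime = RuntimeEngine()
cm = ContextManager()
cm.listeners.append(AdaptationEngine(rules, runtime))
cm.update("ambient_light", 0.9)   # context change triggers the adaptation
```

The same loop runs whether the modules are deployed in the cloud or on the device; only the transport between them changes.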

One challenge is preserving the state of the interaction when applying an adaptation to a change in the context of use. State information can be held at the domain level, the abstract user interface, and the concrete user interface.

Some classes of adaptations can be compiled into the final user interface. For HTML pages, adaptation can be implemented as part of the web page scripts, or through style sheets with CSS Media Queries. This raises the challenge of how to compile high-level adaptation rules expressed in AAL-DL into the final user interface.


The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL: Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance and reliability.

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformations.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If the user is colour-blind, then use an alternative colour palette.

In XML, this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier, through the separation of design concerns and the application of design and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).
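As a toy illustration of design-time verification (this is not the Serenoa rule engine; the palette, font list and widget fields are invented), such a rule could walk a concrete UI design and flag deviations from corporate standards for review:

```python
# Illustrative design-time check: verify a concrete UI design against
# hypothetical corporate standards before it is turned into a final UI.

CORPORATE_PALETTE = {"#003366", "#FFFFFF", "#FF6600"}   # invented brand colours
APPROVED_FONTS = {"CorpSans", "CorpSerif"}              # invented brand fonts

def check_corporate_standards(widgets):
    """Return a list of human-readable violations for the design review."""
    violations = []
    for w in widgets:
        if w.get("color") not in CORPORATE_PALETTE:
            violations.append(f"{w['id']}: colour {w.get('color')} is off-brand")
        if w.get("font") not in APPROVED_FONTS:
            violations.append(f"{w['id']}: font {w.get('font')} is not approved")
    return violations

design = [
    {"id": "header", "color": "#003366", "font": "CorpSans"},
    {"id": "cta_button", "color": "#00FF00", "font": "CorpSans"},  # off-brand green
]
report = check_corporate_standards(design)
```

Running such checks as the design evolves replaces the costly manual review described above with an automated report.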

Further work is needed to identify what changes are needed to support this in the rule language, and to assess its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011, and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices.
• braille - Intended for braille tactile feedback devices.
• embossed - Intended for paged braille printers.
• handheld - Intended for handheld devices (typically small screen, limited bandwidth).
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens.
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available).

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities, and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, with the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter/
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools, to demonstrate the potential and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers).

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications pass through the following stages. These have been annotated with the dates at which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require rechartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram.

Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

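The "set of interrelated abstract dialogues" idea can be sketched informally in Python. This is our own illustration of the structure, not the ASFE-DL serialization: only the term AbstractInteractionUnit is taken from the text, and the field names, interactor kinds and the search/results dialogues are invented.

```python
# Informal sketch of the structure described above: a UI as a set of
# abstract dialogues, each holding interactors and event handlers.

class AbstractInteractionUnit:
    """One abstract dialogue: interactors plus event handlers."""
    def __init__(self, name):
        self.name = name
        self.interactors = []   # e.g. (kind, target) pairs
        self.handlers = {}      # event name -> action

    def on(self, event, action):
        self.handlers[event] = action

# Hypothetical two-dialogue UI: a search form leading to a results view.
ui = {name: AbstractInteractionUnit(name) for name in ("search", "results")}

ui["search"].interactors.append(("input", "query"))        # collect user input
ui["search"].interactors.append(("activator", "submit"))   # activate domain method
ui["search"].on("submit", lambda: "navigate:results")      # navigate between dialogues

outcome = ui["search"].handlers["submit"]()   # system or user can trigger events
```

A concrete UI generator would then map each interactor to a widget appropriate for the target platform.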

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR, first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1n
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

Here, the second symbol for enabling denotes task enabling with information passing. Likewise, the second symbol for concurrency denotes concurrent communicating tasks.
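To make the operators concrete, here is an informal interpretation of two of them (enabling and choice) over finished task traces. This is our own toy semantics for illustration, not the normative CTT semantics or its XML format, and the ATM tasks are invented.

```python
# Toy interpretation of two CTT temporal operators over task traces.
# A trace is the ordered list of atomic tasks actually performed.

def enabling(t1, t2):
    """T1 >> T2: T2 can only start once T1 has terminated."""
    return lambda trace: trace[:len(t1)] == t1 and trace[len(t1):] == t2

def choice(t1, t2):
    """T1 [] T2: exactly one of the two alternatives is performed."""
    return lambda trace: trace == t1 or trace == t2

# Hypothetical ATM fragment: insert card and enter PIN, then withdraw.
card = ["InsertCard", "EnterPIN"]
session = enabling(card, ["Withdraw"])
menu = choice(["Withdraw"], ["CheckBalance"])

ok = session(["InsertCard", "EnterPIN", "Withdraw"])
```

A full CTT interpreter would also handle hierarchy, interruption and concurrency, but the same trace-based reading applies.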

Tasks can be allocated as follows:

• System - data presentation or an action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item
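The hierarchical use model and the five elementary use object types could be represented roughly as follows. This is our own sketch for illustration; UseML itself is an XML language, and the attribute names and the pump example here are invented.

```python
# Rough sketch of a UseML-style use model: a hierarchy of use objects whose
# leaves are elementary use objects of one of the five types listed above.

ELEMENTARY_TYPES = {"trigger", "select", "input", "output", "change"}

def use_object(name, children=None, elementary=None, user_groups=()):
    """Build one node of the use model, validating elementary leaf types."""
    if elementary is not None and elementary not in ELEMENTARY_TYPES:
        raise ValueError(f"unknown elementary type: {elementary}")
    return {"name": name, "children": children or [],
            "elementary": elementary, "user_groups": list(user_groups)}

# Hypothetical fragment of a field-device UI, annotated with a user group:
model = use_object("configure pump", user_groups=["maintenance"], children=[
    use_object("set flow rate", elementary="input"),
    use_object("select mode", elementary="select"),
    use_object("start pump", elementary="trigger"),
])

leaf_types = [c["elementary"] for c in model["children"]]
```

The annotations (user groups, access rights, importance) would drive later steps of the Useware process, such as structuring and design.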

The following diagram describes the UseDM meta-model.


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s, to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

    "UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


    <?xml version="1.0" standalone="no"?>
    <uiml version="2.0">
      <interface name="" class="MyApps">
        <description></description>
        <structure></structure>
        <data></data>
        <style></style>
        <events></events>
      </interface>
      <logic></logic>
    </uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, Redis TupleSpace and MMI-Arch. For more details see the link above.
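The event-based state transitions mentioned above can be sketched as a simple table-driven state machine (plain Python in the spirit of SCXML semantics, not actual SCXML tooling; the states and events for this "button" interactor are invented for illustration):

```python
# Hypothetical states and events for a button-like interactor, modelled as
# (state, event) -> next-state transitions, as an SCXML chart would encode.
TRANSITIONS = {
    ("idle", "focus"): "focused",
    ("focused", "press"): "pressed",
    ("pressed", "release"): "focused",
    ("focused", "blur"): "idle",
}

def run(events, state="idle"):
    """Feed a sequence of events through the transition table.

    Events with no matching transition are ignored, mirroring a state
    chart that stays in its current state for unhandled events.
    """
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state
```

A concrete interactor for a different mode (e.g. vocal) would reuse the same abstract transition structure with mode-specific events, which is the point of separating the abstract and concrete interactor models.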

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators, including sequence, redundance, complementary, assignment and equivalence.
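The role of an operator in linking observations to actions can be sketched as follows (an illustrative reading of two of the operators, with invented observation and action names; this is not the MIM syntax):

```python
# A mapping fires its actions once the operator's condition over the
# observed state changes is satisfied.
def make_mapping(operator, observations, actions):
    def fire(seen):
        if operator == "complementary":
            ready = all(o in seen for o in observations)  # all modes needed
        elif operator == "redundance":
            ready = any(o in seen for o in observations)  # any mode suffices
        else:
            raise ValueError(f"unsupported operator: {operator}")
        return list(actions) if ready else []
    return fire

# Hypothetical drag-and-drop mapping: the drop event is sent only when
# both the gesture and the hover target have been observed.
drop = make_mapping("complementary",
                    ["gesture:drag-end", "target:hover"],
                    ["send:drop-event"])
```

The exemplary mappings listed below (drag and drop, gesture-based navigation) follow this pattern of combining observations from several modalities before triggering actions.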

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium; see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface") which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements
• Relation: a group where two or more elements are related to each other
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements
• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). A declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
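The two-way binding between interactors and data-model elements described in the Data Model feature can be sketched as follows (a minimal illustration; the class and method names are invented, not MARIA constructs):

```python
# Sketch of MARIA-style two-way binding: changing the interactor's state
# updates the bound data element, and changing the data element updates
# every interactor bound to it.
class DataElement:
    def __init__(self, value=None):
        self.value = value
        self.bound = []          # interactors bound to this element

    def set(self, value):
        """Model-side change: propagate to all bound interactors."""
        self.value = value
        for interactor in self.bound:
            interactor.state = value

class Interactor:
    def __init__(self, element):
        self.element = element
        self.state = element.value
        element.bound.append(self)

    def user_input(self, value):
        """UI-side change: propagate back to the data model."""
        self.state = value
        self.element.value = value

name = DataElement("initial")
field = Interactor(name)
```

This is the mechanism that lets conditional layout and conditional connections between presentations be expressed purely in terms of data-model state.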

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers
• Mobile CUIs model graphical interfaces for mobile devices
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented on the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension of the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute with information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute with information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis or whether the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)

• Changing the synthesis properties (such as volume and gender)

• Inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (when the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, indicating whether or not to synthesize the last communication again.
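The vocal event model just described can be sketched as a small dispatch table (an illustrative sketch only; the handler structure and prompt texts are invented, not part of MARIA):

```python
# Each vocal event carries a message and a re-prompt flag, as in the
# MARIA event model described above.
EVENT_HANDLERS = {
    "noinput": {"message": "Sorry, I did not hear you.", "reprompt": True},
    "nomatch": {"message": "Sorry, I did not understand.", "reprompt": True},
    "help":    {"message": "You can say 'next' or 'back'.", "reprompt": False},
}

def handle(event, last_prompt):
    """Return the utterances to synthesize when a vocal event occurs."""
    handler = EVENT_HANDLERS[event]
    out = [handler["message"]]
    if handler["reprompt"]:
        out.append(last_prompt)   # re-synthesize the last communication
    return out
```

For example, a noinput event after the question "Which city?" would cause the error message to be spoken followed by the original question, while help would speak only its guidance message.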

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used in model-based user interface design and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.
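To illustrate what a non-XML interchange format might look like, here is a hypothetical JSON rendering of a small CTT-style task tree (the structure and key names are invented for illustration; they are not the normative interchange format):

```python
import json

# A small task tree: the "enabling" operator sequences the subtasks,
# and a postcondition is attached to the payment step, echoing the
# refinements introduced by the specification.
task_model = {
    "task": "BookFlight",
    "operator": "enabling",
    "subtasks": [
        {"task": "SelectFlight"},
        {"task": "EnterPayment",
         "postcondition": "payment.valid == true"},
    ],
}

# JSON round-trips the model without loss, which is the basic property
# an interchange format needs.
serialized = json.dumps(task_model)
restored = json.loads(serialized)
```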

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use, at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr. Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N. J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



The Advanced Adaptation Logic Description Language (AAL-DL) seems well suited for standardization, although this may not be practical until we have more experience of how well the run-time architecture performs in a variety of settings.

2.9 Advanced Adaptation Logic Description Language (AAL-DL)

One of the aims of Serenoa is to develop a high-level language for declarative descriptions of advanced adaptation logic (AAL-DL). This is described in detail in:

• Deliverable D3.3.1: AAL-DL Semantics, Syntaxes and Stylistics

AAL-DL as currently defined can be used for first-order adaptation rules for a specific context of use, and second-order rules that select which first-order rules to apply. Further work is under consideration for third-order rules that act on second-order rules, e.g. to influence usability, performance and reliability.
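The relationship between the rule orders can be sketched as follows (an illustrative sketch only: the rule names, conditions and context keys are invented, and this is plain Python, not AAL-DL syntax):

```python
# First-order rules adapt the UI for a specific context of use; each is
# guarded by a condition over the context. A second-order rule selects
# which first-order rules to apply.
FIRST_ORDER = {
    "high_contrast": lambda ctx: ctx.get("user") == "colour-blind",
    "large_fonts":   lambda ctx: ctx.get("age", 0) >= 65,
}

def second_order(ctx):
    """Select the first-order rules whose conditions hold for this context."""
    return sorted(name for name, cond in FIRST_ORDER.items() if cond(ctx))

active = second_order({"user": "colour-blind", "age": 70})
```

A third-order rule, in this picture, would adjust the selection strategy itself, e.g. suppressing or reordering second-order rules to trade off usability against performance.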

Current examples of AAL-DL focus on adaptation to events signalling changes in the context of use. In principle, it could also be used for design-time transformation.

The AAL-DL metamodel is as follows:

This diagram just presents the main subclasses of the action element (create, read, update, delete, if, while, foreach, for, block and invokeFunction). An XML Schema has been specified for interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule:

• If the user is colour-blind, then use an alternative color palette

In XML this looks like:

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier through the separation of design concerns and the application of design-time and run-time rule engines. Rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011 and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, at a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices
• braille - Intended for braille tactile feedback devices
• embossed - Intended for paged braille printers
• handheld - Intended for handheld devices (typically small screen, limited bandwidth)
• print - Intended for paged material and for documents viewed on screen in print preview mode
• projection - Intended for projected presentations, for example projectors
• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers (note: CSS2 had a similar media type called aural for this purpose)
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities); authors should not use pixel units with the tty media type
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities, and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/
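For illustration, a style sheet using media queries can provide different rules per media type and per device width. This is a minimal sketch; the selectors and the 480px breakpoint are arbitrary choices, not values from any specification.

```css
/* Default styles for all media */
body { font-family: sans-serif; }

/* Printed output: hide navigation */
@media print {
  nav { display: none; }
}

/* Narrow screens, e.g. smart phones */
@media screen and (max-width: 480px) {
  .sidebar { display: none; }
  body { font-size: 110%; }
}
```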

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.
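To make the limitation concrete: client-side adaptation typically falls back on pattern-matching the user agent string, which is fragile because vendors freely imitate each other's strings. The patterns below are illustrative only, not a robust detection method.

```javascript
// Naive user-agent sniffing. The substring patterns are illustrative only;
// real user agent strings routinely defeat this kind of test.
function guessDeviceClass(userAgent) {
  if (/Mobile|Android|iPhone/.test(userAgent)) {
    return "handheld";
  }
  return "desktop";
}

// In a browser this would be called as guessDeviceClass(navigator.userAgent).
console.log(guessDeviceClass(
  "Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X)")); // prints: handheld
```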

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face to face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


• Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications proceed through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram.


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR, first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams.


The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1n
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

Where the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user
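As a rough illustration of the structures involved, a fragment of a task model with an enabling operator might be represented as below. The object layout is an assumption made for this sketch, not the CTT XML interchange format.

```javascript
// Illustrative task-model fragment: a login task where entering the name
// enables entering the password, which enables submission (the '>>' operator).
// The object layout is invented for this sketch, not the CTT XML format.
const loginTask = {
  name: "Login",
  operator: ">>",            // enabling: subtasks are performed left to right
  subtasks: [
    { name: "EnterName",     allocation: "user input" },
    { name: "EnterPassword", allocation: "user input" },
    { name: "Submit",        allocation: "system" }
  ]
};

// Flatten the enabled order of leaf tasks for the '>>' operator.
function enabledOrder(task) {
  if (!task.subtasks) return [task.name];
  return task.subtasks.flatMap(enabledOrder);
}

console.log(enabledOrder(loginTask));
// → ["EnterName", "EnterPassword", "Submit"]
```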

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item
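A use model along these lines could be serialized roughly as follows. The element and attribute names here are assumptions made for illustration; they are not the published useML schema.

```xml
<!-- Hypothetical sketch of a use model; names are illustrative,
     not taken from the useML schema -->
<useModel name="PumpControl">
  <useObject name="AdjustFlowRate" userGroups="operator,maintenance">
    <output name="ShowCurrentRate"/>
    <change name="IncreaseRate"/>
    <trigger name="ApplySetting"/>
  </useObject>
</useModel>
```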

The following diagram describes the UseDM meta-model.


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore, UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application dependent but appliance independent events, and then bind them to appliance dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic models are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows.


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping
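As an indication of how SCXML expresses event-based state transitions, a toggle-style interactor could be sketched as below. The state and event names are invented for illustration; they are not taken from the AIM submission.

```xml
<!-- Sketch of a toggle-button interactor in SCXML; the state and event
     names are illustrative only -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <transition event="press" target="active"/>
  </state>
  <state id="active">
    <transition event="press" target="idle"/>
  </state>
</scxml>
```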

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend
• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high- to lower-level
• Abstraction: from low- to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.
• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).
• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).
• Only output: represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.
• Relation: a group where two or more elements are related to each other.
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.
• Repeater: used to repeat the content according to data retrieved from a generic data source.

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases etc.). One declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated, invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
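The binding between interactors and data elements described above behaves much like a two-way property-change observer. The sketch below illustrates the mechanism only; the function names are invented for this example and are not MARIA syntax.

```javascript
// Minimal sketch of a property-change mechanism, loosely analogous to
// MARIA's Property Change Events. Names are invented for illustration.
function createBoundValue(initial) {
  const listeners = [];
  let value = initial;
  return {
    get: () => value,
    set(next) {
      const prev = value;
      value = next;
      // Notify listeners only when the value actually changes.
      if (prev !== next) listeners.forEach(fn => fn(next, prev));
    },
    onChange: fn => listeners.push(fn)
  };
}

// Bind an interactor's state to a data element: updating one updates the other.
const dataElement = createBoundValue("red");
const log = [];
dataElement.onChange(next => log.push(next));
dataElement.set("green");
console.log(log); // ["green"]
```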

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers
• Mobile CUIs model graphical interfaces for mobile devices
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, with information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute, with information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
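As an illustration, a concrete desktop presentation refining several of these abstract interactors might be serialized along the following lines. This is only a sketch in a MARIA-like XML style: the element and attribute names are illustrative assumptions, not the normative MARIA CUI syntax.

```xml
<!-- Hypothetical sketch: names are illustrative, not the actual MARIA CUI schema -->
<presentation name="login"
              presentation_setting="title:Login; background:#ffffff; font:sans-serif">
  <grouping grouping_setting="technique:fieldset; ordering:true">
    <!-- AUI TextEdit refined as a graphical text_field -->
    <text_edit type="text_field" label="User name"/>
    <!-- AUI SingleChoice refined as a drop_down_list -->
    <single_choice type="drop_down_list" label="Language">
      <choice value="en"/>
      <choice value="it"/>
    </single_choice>
    <!-- AUI Activator refined as a button -->
    <activator type="button" label="Sign in"/>
  </grouping>
</presentation>
```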

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation to be a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis or whether the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; or a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform to recognise the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI offers four solutions for identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).

• Changing the synthesis properties (such as volume and gender).

• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (when the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
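To make the vocal refinements and the event model concrete, a vocal selection with its event handlers might be sketched as follows. This is an illustrative, VoiceXML-flavoured sketch; the element and attribute names are assumptions and do not reproduce the actual MARIA Vocal CUI serialization.

```xml
<!-- Hypothetical sketch of a vocal presentation; names are illustrative -->
<vocal_presentation synthesis_volume="0.8" dtmf_terminator="#">
  <single_choice>
    <!-- barge-in lets the user interrupt the synthesized prompt -->
    <vocal_request barge_in="true">Say the name of your destination city.</vocal_request>
    <grammar src="cities.grxml"/>
    <!-- event handlers carry the message and re-prompt attributes described above -->
    <noinput message="Sorry, I did not hear anything." re-prompt="true"/>
    <nomatch message="Sorry, I did not understand." re-prompt="true"/>
    <help message="Please say a city name, for example Rome." re-prompt="false"/>
  </single_choice>
</vocal_presentation>
```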

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environment", ACM Transactions on Computer-Human Interaction, Vol. 16, N. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this terminology to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.
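For illustration, a fragment of a task model in an XML interchange format might look roughly as follows. This sketch is an assumption based on the CTT concepts described in this section (task categories and temporal operators); it does not reproduce the actual schema of the Working Draft.

```xml
<!-- Hypothetical sketch of a CTT-style task model; not the actual W3C schema -->
<taskModel name="MakeFlightReservation">
  <task name="SelectFlight" category="interaction">
    <!-- enabling: SpecifyCriteria must complete before ShowFlights starts -->
    <task name="SpecifyCriteria" category="interaction"/>
    <temporalOperator type="enabling"/>
    <task name="ShowFlights" category="application"/>
  </task>
  <temporalOperator type="disabling"/>
  <!-- disabling: Cancel can deactivate the whole selection subtask -->
  <task name="Cancel" category="interaction"/>
</taskModel>
```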

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such it is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design-time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744 (Software Engineering - Metamodel for Development Methodologies) is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much at the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques. This will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: UsiXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.

interchange of AAL-DL rules, but as yet there is no agreement on a high-level syntax aimed at direct editing.

Here is an example of a rule

bull If user is colour-blind then use alternative color palette

In XML this looks like
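As an illustration only, a hypothetical event-condition-action encoding of the rule above might look like the following. All element names in this sketch are invented for illustration and are not the actual AAL-DL syntax.

```xml
<!-- Hypothetical sketch of an adaptation rule; not the actual AAL-DL serialization -->
<rule id="colour-blind-palette">
  <!-- event: the rule is evaluated when the user context changes -->
  <event type="contextChange" entity="user"/>
  <condition>
    <equals property="user.colourBlind" value="true"/>
  </condition>
  <action>
    <setProperty target="ui.theme" name="palette" value="alternative"/>
  </action>
</rule>
```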

A significant challenge will be to explore the practicality of enabling developers to work with a high-level rule syntax, rather than at the level expressed in the XML example.

AAL-DL could be submitted to W3C as a basis for a rule language; however, further work will be needed to demonstrate its practical effectiveness on a range of examples before the W3C Model-Based User Interfaces Working Group is likely to proceed with standardizing an adaptation rule language. In practice, this is something that would likely take place when the Working Group is rechartered in early 2014, i.e. after the Serenoa project comes to an end.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier through the separation of design concerns and the application of design-time and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are required to support this in the rule language, and to assess its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model-Based User Interfaces Working Group was formed on 17 October 2011 and provides the main target for standardizing work from the Serenoa project. This section describes the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device-independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, at a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices.

• braille - Intended for braille tactile feedback devices.

• embossed - Intended for paged braille printers.

• handheld - Intended for handheld devices (typically small screen, limited bandwidth).

• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• screen - Intended primarily for color computer screens.

• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.

• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.

• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available).

Few browsers supported CSS media queries apart from screen and print. More recently the specification has added further capabilities, and it finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections describe how this was picked up by W3C, leading to the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter/
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to the talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face meeting was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and by the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows.

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers).

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows

• Recommendation Track specification for task models

• Recommendation Track specification for abstract user interface models

• Working Group Note introducing model-based user interface design along with use cases

• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications pass through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
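To make this concrete, the following sketch shows what such a dialogue description might look like. It is purely illustrative: the element and attribute names are our own assumptions for exposition, not the actual ASFE-DL vocabulary.

```xml
<!-- Hypothetical sketch only; element names are illustrative,
     not taken from the actual ASFE-DL schema. -->
<abstractInteractionUnit id="login">
  <input id="userName" binds="user/name"/>
  <input id="password" binds="user/password"/>
  <activator id="submit" method="authenticate">
    <!-- an event handler navigating to another dialogue -->
    <onEvent type="activation" navigateTo="home"/>
  </activator>
</abstractInteractionUnit>
```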

3.4.2 The ConcurTaskTrees Notation (CTT)

http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization, and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1(n)
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

Where the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.
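As an illustration, a hypothetical hotel-booking task could combine several of these operators (the task names are invented for this example):

```
MakeBooking = SpecifyDates []>> SearchRooms >> (SelectRoom [] RefineSearch)
```

Here SpecifyDates enables SearchRooms and passes the chosen dates to it, after which the user either selects a room or chooses to refine the search.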

Tasks can be allocated as follows

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and the dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities, and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated with attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item
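A use model along these lines might be serialized roughly as follows. The element names below are illustrative assumptions rather than the normative UseML vocabulary.

```xml
<!-- Illustrative sketch; not the normative UseML schema. -->
<useModel>
  <useObject name="ConfigurePump" userGroups="maintenance" importance="high">
    <select name="OperatingMode"/>    <!-- choose among given items -->
    <input name="TargetPressure"/>    <!-- enter an absolute value -->
    <change name="AdjustFlowRate"/>   <!-- relative change to a value -->
    <trigger name="StartPump"/>       <!-- execute a device function -->
    <output name="CurrentStatus"/>    <!-- gather information -->
  </useObject>
</useModel>
```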

The following diagram describes the UseDM meta-model


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.
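To illustrate the use of SCXML for interactor behaviour, the chart below models a hypothetical two-state toggle interactor. The markup follows the W3C SCXML syntax, but the state and event names are our own assumptions, not taken from the AIM submission.

```xml
<!-- Valid SCXML; the toggle interactor itself is an invented example. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="off">
  <state id="off">
    <transition event="user.toggle" target="on"/>
  </state>
  <state id="on">
    <transition event="user.toggle" target="off"/>
  </state>
</scxml>
```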

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes, by sending events to state charts or by calling functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings

• Reification: from high to lower level
• Abstraction: from low to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.
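A mapping declaration of this kind might be sketched as follows. Note that this is illustrative pseudo-markup of the idea, not the actual UsiXML mapping syntax; the interactor names are invented.

```xml
<!-- Illustrative only; not actual UsiXML syntax. -->
<mapping type="reification">
  <source level="aui" ref="askDestination"/>   <!-- abstract input interactor -->
  <target level="cui" ref="destinationCombo"/> <!-- concrete combo box -->
</mapping>
```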

UsiXML defines context of use models

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework, with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements
• Relation: a group where two or more elements are related to each other
• Composite Description: represents a group aimed at presenting contents through a mixture of Description and Navigator elements
• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back end (e.g. web services, code libraries, databases, etc.). A declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
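The following fragment sketches how these abstract-level concepts might fit together in a description of a simple search presentation. The element and attribute names are loose approximations for illustration, not the normative MARIA XML vocabulary.

```xml
<!-- Approximate sketch; not the normative MARIA XML schema. -->
<presentation name="search">
  <text_edit id="keywords" data_ref="query/keywords"/>     <!-- bound to the data model -->
  <single_choice id="category" data_ref="query/category"/>
  <activator id="go">
    <activation_event>
      <external_function name="doSearch"/>  <!-- declared in the generic back end -->
    </activation_event>
  </activator>
</presentation>
```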

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers
• Mobile CUIs model graphical interfaces for mobile devices
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute holding information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute holding information on the grouping display technique (grid, fieldset, bullet, background color or image) and on whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user, and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).
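As an illustration, a vocal textual input for collecting a destination city might look roughly like this. The markup is an informal sketch, not the exact MARIA Vocal CUI syntax, and the grammar file name is invented.

```xml
<!-- Informal sketch; not the exact MARIA Vocal CUI syntax. -->
<vocal_textual_input id="destination">
  <request>Please say your destination city.</request>
  <grammar src="cities.grxml"/>  <!-- external grammar for recognizing the input -->
</vocal_textual_input>
```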

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).

• Changing the synthesis properties (such as volume and gender).

• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time); nomatch (the input provided does not match any possible acceptable input); and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs; and re-prompt, to indicate whether or not to synthesize the last communication again.
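The event semantics just described can be sketched in a few lines of code. This is a minimal illustration of the noinput/nomatch/help events and their message and re-prompt attributes; the class and function names are our own, not part of MARIA.

```python
# A minimal sketch (our own names, not part of MARIA) of how a vocal
# platform might dispatch the noinput/nomatch/help events described above.

class VocalEvent:
    """One of the three vocal event types, with its two attributes."""
    def __init__(self, kind, message, reprompt=False):
        assert kind in ("noinput", "nomatch", "help")
        self.kind = kind          # noinput, nomatch or help
        self.message = message    # message rendered when the event occurs
        self.reprompt = reprompt  # synthesize the last communication again?

def handle_event(event, last_prompt):
    """Return the list of utterances the platform should synthesize."""
    utterances = [event.message]
    if event.reprompt:
        utterances.append(last_prompt)
    return utterances

# Example: the user stays silent, so a noinput event fires with re-prompt.
ev = VocalEvent("noinput", "Sorry, I did not hear you.", reprompt=True)
print(handle_event(ev, "Please say your destination city."))
# → ['Sorry, I did not hear you.', 'Please say your destination city.']
```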

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used in model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this terminology to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


2.10 Corporate Rules for Consistent User Experience

Companies often wish to ensure that the user interfaces on their products have a consistent look and feel that expresses the brand the company is promoting. It is still the case that many designers focus first on the visual appearance, by working with tools like Adobe Illustrator to mock up the appearance of a user interface. This leads to costly manual processes for reviewing whether the resultant user interface designs match corporate standards.

The Serenoa Framework has the potential to make this a lot easier through the separation of design concerns and the application of design and run-time rule engines. The rules can be written to verify adherence to corporate standards as the user interface is being designed. At run-time, business rules can be used to implement corporate standards. The concrete user interface languages can be designed to reflect the templates and components required. The process of transforming the concrete user interface into the final user interface can be designed to apply the corporate branded look and feel (skinning the user interface).

Further work is needed to identify what changes are needed to support this in the rule language, and its suitability for standardization. There is some potential for standardizing the means for skinning the concrete user interface for particular classes of target platforms.

3 W3C Model-Based UI Working Group

This section of the report describes standardization activities at the W3C on model-based user interface design.

3.1 MBUI WG - Introduction

The W3C Model Based User Interfaces Working Group was formed on 17 October 2011 and provides the main target for standardizing work from the Serenoa project. This section will describe the history leading up to the formation of this Working Group, its charter, the technical submissions received, the current work items and future plans.

3.2 MBUI WG History

When Tim Berners-Lee invented the World Wide Web at the start of the nineties, he set out to ensure that it would be accessible from a wide range of platforms. Early examples include the NeXT computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose for how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles were published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices

• braille - Intended for braille tactile feedback devices

• embossed - Intended for paged braille printers

• handheld - Intended for handheld devices (typically small screen, limited bandwidth)

• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• screen - Intended primarily for color computer screens

• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.

• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.

• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)
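For instance, a style sheet can pair base rules with media-specific overrides; the selectors and values below are illustrative only:

```css
/* Base rules applied to all media */
body { font-family: sans-serif; font-size: 12pt; }

/* Overrides applied only when printing */
@media print {
  body { font-size: 10pt; }
  nav  { display: none; }   /* hide navigation on paper */
}

/* CSS3 media query: narrow handheld screens */
@media screen and (max-width: 480px) {
  body { font-size: 16px; }
}
```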

Few browsers supported CSS media queries apart from screen and print. More recently the specification has added further capabilities, and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter/
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face to face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda and links to talks, and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until the 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter

The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaption to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications follow the following stages. These have been annotated with the dates the MBUI deliverables were envisioned by the charter to reach each stage:

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104): http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram.


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
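To give a flavour of that structure, here is a purely hypothetical sketch of such a description; the element and attribute names below are invented for illustration and are not taken from the actual ASFE-DL schema:

```xml
<!-- Hypothetical ASFE-DL-style abstract UI; all names are illustrative -->
<abstractUI>
  <abstractInteractionUnit id="searchDialogue">
    <input id="query" binds="domain:searchTerm"/>       <!-- collect user input -->
    <activator id="run" invokes="domain:search"/>       <!-- call domain method -->
    <navigator id="toResults" target="resultsDialogue"/><!-- move between dialogues -->
    <eventHandler on="run.triggered" do="domain:search"/>
  </abstractInteractionUnit>
  <abstractInteractionUnit id="resultsDialogue">
    <output id="list" binds="domain:results"/>
  </abstractInteractionUnit>
</abstractUI>
```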

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1*(n)
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

Where the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.
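As an illustration only, a login task expressed with the operators above might be serialized along the following lines; the element and attribute names are invented for this sketch and are not taken from the actual CTT XML schema:

```xml
<!-- Hypothetical CTT-style task model serialization; names are illustrative -->
<TaskModel name="Login">
  <Task id="EnterCredentials" allocation="user">
    <Task id="EnterName" allocation="user"/>
    <Operator type="enabling"/>                <!-- the >> operator -->
    <Task id="EnterPassword" allocation="user"/>
  </Task>
  <Operator type="enablingWithInfoPassing"/>   <!-- the []>> operator -->
  <Task id="ShowResult" allocation="system"/>
</TaskModel>
```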

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application dependent but appliance independent events, and then bind them to appliance dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
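By way of illustration, a minimal sketch of the template filled in for a single button; treat the part and property names as illustrative rather than normative, and consult the OASIS specification for the exact syntax:

```xml
<?xml version="1.0"?>
<uiml version="2.0">
  <interface name="Hello" class="MyApps">
    <structure>
      <!-- one component, identified by name and class -->
      <part name="greetBtn" class="Button"/>
    </structure>
    <style>
      <!-- bind a presentation property to the component -->
      <property part-name="greetBtn" name="label">Say hello</property>
    </style>
    <events>
      <!-- appliance-independent events would be bound to actions here -->
    </events>
  </interface>
</uiml>
```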

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic models are:


1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping
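As a flavour of the SCXML notation referred to above, a minimal state chart for an interactor that toggles between two states might look as follows; the state and event names are invented for illustration:

```xml
<!-- Minimal SCXML sketch: a press toggles an interactor between
     "idle" and "active"; states and events are illustrative -->
<scxml xmlns="http://www.w3.org/2005/07/scxml"
       version="1.0" initial="idle">
  <state id="idle">
    <transition event="press" target="active"/>
  </state>
  <state id="active">
    <transition event="release" target="idle"/>
  </state>
</scxml>
```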

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, Redis TupleSpace and MMI-Arch. For more details see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML and defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework, where task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high- to lower-level
• Abstraction: from low- to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.
• Relation: a group where two or more elements are related to each other.
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.
• Repeater: used to repeat the content according to data retrieved from a generic data source.
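To illustrate the concepts above, the following is a hypothetical MARIA-style AUI fragment: a presentation containing a grouping of interactors. The element names are illustrative only and do not reproduce the normative MARIA syntax.

```xml
<!-- Hypothetical AUI sketch; element names are illustrative,
     not the normative MARIA syntax -->
<presentation name="flight_search">
  <grouping name="search_form">
    <!-- Selection interactor (Single Choice) -->
    <single_choice name="cabin_class">
      <choice_element value="economy"/>
      <choice_element value="business"/>
    </single_choice>
    <!-- Edit interactor (Text Edit) -->
    <text_edit name="destination"/>
    <!-- Control interactor (Activator) -->
    <activator name="search"/>
  </grouping>
  <!-- Only-output interactor (Description) -->
  <description name="results_summary"/>
</presentation>
```

Note how the fragment stays silent on modality and widgets: nothing here says whether the Single Choice becomes a drop-down list or a spoken question.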

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features, available already at the abstract level and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound with elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function that specifies its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
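The data model, back-end and event features can be pictured with another hedged fragment; as before, the element names are invented for exposition rather than taken from the MARIA schema.

```xml
<!-- Hypothetical sketch of abstract-level data binding and events;
     names are illustrative, not the actual MARIA syntax -->
<interface name="registration">
  <!-- Data model defined with standard XML Schema constructs -->
  <data_model>
    <xs:element name="age" type="xs:integer"/>
  </data_model>
  <!-- External function declaration for the generic back end -->
  <external_function name="storeUser">
    <input name="age" type="xs:integer"/>
  </external_function>
  <presentation name="form">
    <!-- Binding: editing the interactor updates 'age', and vice-versa -->
    <numerical_edit name="age_field" data_binding="age">
      <!-- Property change event with an optional precondition -->
      <property_change_event property="value" precondition="age >= 18"/>
    </numerical_edit>
    <activator name="submit">
      <!-- Activation event: invokes the external function -->
      <activation_event function="storeUser"/>
    </activator>
  </presentation>
</interface>
```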

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.
• Mobile CUIs model graphical interfaces for mobile devices.
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.
• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, with information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute, with information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, an image_link, an image_map (an image with the definition of a set of areas, each one associated with a different value) or a mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.
• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.
• A TextEdit can be implemented as a text_field or a text_area.
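As an illustration, the hypothetical fragment below refines the abstract interactors for the graphical desktop platform, using the presentation_setting and grouping_setting attributes mentioned above. The concrete syntax shown is invented for exposition and is not the normative MARIA metamodel.

```xml
<!-- Hypothetical Desktop CUI sketch; the concrete syntax is
     illustrative, not the normative MARIA metamodel -->
<presentation name="flight_search"
              presentation_setting="title: Flight Search; background: white; font: sans-serif">
  <grouping name="search_form"
            grouping_setting="technique: fieldset; ordering: true">
    <!-- SingleChoice refined as a drop_down_list -->
    <drop_down_list name="cabin_class"/>
    <!-- TextEdit refined as a text_field -->
    <text_field name="destination"/>
    <!-- Activator refined as a button -->
    <button name="search" label="Search"/>
  </grouping>
</presentation>
```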

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinement for obtaining the Vocal CUI definition involves defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.
• A Description can be implemented as:
  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.
  ◦ pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version, the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).
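The vocal refinements can be sketched in the same hedged style; element and attribute names are again invented for exposition, not taken from the normative MARIA metamodel.

```xml
<!-- Hypothetical Vocal CUI sketch; names are illustrative,
     not the normative MARIA metamodel -->
<presentation name="flight_search_vocal">
  <!-- Description refined as synthesized speech; barge-in controls
       whether the user may interrupt the synthesis -->
  <speech barge_in="true" pitch="medium" volume="loud">
    Welcome to flight search. Where would you like to go?
  </speech>
  <!-- TextEdit refined as vocal textual input with an external grammar -->
  <vocal_textual_input name="destination" grammar="grammars/cities.grxml"/>
  <!-- SingleChoice refined as a vocal selection accepting one choice -->
  <vocal_selection name="cabin_class" question="Economy or business?">
    <dtmf_input value="1" choice="economy"/>
    <dtmf_input value="2" choice="business"/>
  </vocal_selection>
  <!-- ObjectEdit refined as a record element -->
  <record name="comment" beep="true" maxtime="10s" finalsilence="2s"/>
</presentation>
```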

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.
• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).
• Changing the synthesis properties (such as volume and gender).
• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
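These events might be handled along the following hypothetical lines; the attribute names mirror the description above but are illustrative only.

```xml
<!-- Hypothetical sketch of vocal event handling; names are illustrative -->
<vocal_selection name="cabin_class" question="Economy or business?">
  <!-- noinput: nothing heard within the allowed time -->
  <noinput message="Sorry, I did not hear you." re_prompt="true"/>
  <!-- nomatch: input did not match the grammar -->
  <nomatch message="Please answer economy or business." re_prompt="true"/>
  <!-- help: user explicitly asked for support -->
  <help message="Say the name of the cabin class you want." re_prompt="false"/>
</vocal_selection>
```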

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environment", ACM Transactions on Computer-Human Interaction, Vol. 16, N. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to that of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend/resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
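For illustration, a small task model exercising two of these operators might be serialized along the following lines; the element and attribute names are illustrative and are not taken from the published Working Draft.

```xml
<!-- Hypothetical CTT-style task model sketch; names are illustrative,
     not the W3C task models interchange format -->
<taskModel name="CarNavigation">
  <task name="Navigate" category="abstract">
    <!-- Enabling: the destination must be set before guidance starts -->
    <task name="SetDestination" category="interaction" operator="enabling"/>
    <!-- Suspend/resume: a hazard alert suspends route guidance,
         which resumes once the hazard has passed -->
    <task name="FollowRoute" category="interaction" operator="suspend_resume"/>
    <task name="HazardAlert" category="application"/>
  </task>
</taskModel>
```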


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



computer, a sophisticated graphics workstation, and dumb text terminals using the CERN Line Mode Browser. By the mid-nineties, popular browsers included Netscape's Navigator and Microsoft's Internet Explorer. The success of the latter meant that most people were interacting with the Web from a desktop computer running Microsoft Windows. Some websites even went as far as stating "best viewed in Internet Explorer".

By the end of the nineties, as the potential of mobile devices began to get people's attention, the challenge arose of how to enable designers to create Web applications for use on desktop and mobile devices. W3C launched the Device Independence Working Group to address these challenges. A set of draft device independence principles was published in September 2001:

• http://www.w3.org/TR/2001/WD-di-princ-20010918/

This followed on from earlier work at W3C on Composite Capability/Preference Profiles (CC/PP), a means for devices to advertise their capabilities so that web sites could deliver content adapted to the needs of each device. That led to a W3C Recommendation for CC/PP 1.0 in January 2004:

• http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/

W3C went on to work on a device independent authoring language (DIAL). This combines HTML with simple rules according to the device's capabilities:

• http://www.w3.org/TR/2007/WD-dial-20070727/

With DIAL, adaptation could take place anywhere along the delivery chain, i.e. at the originating web site, a proxy server, or in the browser. CC/PP and DIAL both failed to take off in practice. One issue was that mobile device vendors failed to provide accurate information on device capabilities. Another was that browser developers had, at that time, little interest in device independence, with the exception of limited support for conditionals in style sheets (CSS Media Queries):

• http://www.w3.org/TR/CSS2/media.html

This allowed you to provide different style rules for a limited set of device categories:

• all - Suitable for all devices.
• braille - Intended for braille tactile feedback devices.
• embossed - Intended for paged braille printers.
• handheld - Intended for handheld devices (typically small screen, limited bandwidth).
• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
• screen - Intended primarily for color computer screens.
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.
• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.
• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available).

Few browsers supported CSS media queries apart from screen and print. More recently, the specification has added further capabilities and finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part, this is driven by concerns over privacy. The more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries, and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy, on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter
• http://www.w3.org/2005/Incubator/model-based-ui

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face meeting was hosted by DFKI in Kaiserslautern, Germany, on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:

• Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications pass through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow the language to meet the requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:

Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
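As an informal illustration only (the class and attribute names below are invented for exposition, and are not the ASFE-DL vocabulary or serialization), the structure just described can be sketched as a small object model: dialogues holding interactors, plus event handlers that navigate between dialogues.

```python
# Illustrative sketch: names are hypothetical, not ASFE-DL's own.
from dataclasses import dataclass, field

@dataclass
class Interactor:
    name: str
    kind: str                      # e.g. "input", "activator", "output"

@dataclass
class AbstractInteractionUnit:     # an abstract dialogue
    name: str
    interactors: list = field(default_factory=list)
    handlers: dict = field(default_factory=dict)   # event name -> action

@dataclass
class AbstractUI:
    dialogues: dict = field(default_factory=dict)
    current: str = ""

    def fire(self, event):
        """Run the handler registered for an event in the current dialogue."""
        action = self.dialogues[self.current].handlers.get(event)
        if action:
            action(self)

# A two-dialogue UI where submitting the login form navigates to "home".
ui = AbstractUI(current="login")
ui.dialogues["login"] = AbstractInteractionUnit(
    "login",
    interactors=[Interactor("username", "input"),
                 Interactor("submit", "activator")],
    handlers={"submit.pressed": lambda u: setattr(u, "current", "home")})
ui.dialogues["home"] = AbstractInteractionUnit("home")

ui.fire("submit.pressed")
print(ui.current)  # -> home
```

The event handler here plays the role of a navigation action between dialogues; in ASFE-DL such handlers can also update the domain model or invoke its methods.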

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:

The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2  or  T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1*  or  T1(n)
Concurrency          T1 ||| T2  or  T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

where the second symbol for enabling denotes task enabling with information passing. Likewise, the second symbol for concurrency denotes concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user
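As a minimal sketch of how such a task tree might be represented in a design tool (the class names, the example tasks, and the traversal are invented here, not part of the CTT specification), consider a hierarchy where siblings are connected by a temporal operator:

```python
# Illustrative CTT-style task tree; names and API are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    allocation: str = "user"       # "user", "system" or "cognition"
    operator: str = ""             # temporal operator to the next sibling, e.g. ">>"
    children: list = field(default_factory=list)

def leaves(task):
    """Return the leaf tasks in depth-first (left-to-right) order."""
    if not task.children:
        return [task.name]
    out = []
    for child in task.children:
        out.extend(leaves(child))
    return out

# A classic example: insert card >> enter PIN >> withdraw cash.
root = Task("AccessATM", children=[
    Task("InsertCard", operator=">>"),
    Task("EnterPIN", operator=">>"),
    Task("WithdrawCash", allocation="system"),
])
print(leaves(root))  # -> ['InsertCard', 'EnterPIN', 'WithdrawCash']
```

With the enabling operator (>>) between all siblings, the left-to-right leaf order coincides with the order in which the tasks become enabled.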

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization

The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item
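To make the use-model structure concrete, here is a hypothetical sketch (UseML itself is an XML language; these Python names merely mirror the concepts just listed and are not the UseML schema):

```python
# Illustrative model of UseML concepts; names are invented for exposition.
from enum import Enum
from dataclasses import dataclass, field

class ElementaryType(Enum):
    TRIGGER = "trigger"   # start/call/execute a device function
    SELECT = "select"     # choose one or more items
    INPUT = "input"       # enter an absolute value
    OUTPUT = "output"     # present information to the user
    CHANGE = "change"     # make a relative change to a value

@dataclass
class UseObject:
    name: str
    user_groups: list = field(default_factory=list)   # annotation attributes
    children: list = field(default_factory=list)
    elementary: ElementaryType = None                 # set only on leaves

# A small hierarchy: a "SetTemperature" use object with two elementary leaves.
root = UseObject("SetTemperature", user_groups=["operator"], children=[
    UseObject("EnterValue", elementary=ElementaryType.INPUT),
    UseObject("Apply", elementary=ElementaryType.TRIGGER),
])
leaf_kinds = [c.elementary.value for c in root.children]
print(leaf_kinds)  # -> ['input', 'trigger']
```

The nesting of `UseObject` instances corresponds to the hierarchically ordered structure of the use model, with annotations such as `user_groups` attached at any level.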

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

    UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:

    <?xml version="1.0" standalone="no"?>
    <uiml version="2.0">

      <interface name="" class="MyApps">
        <description></description>
        <structure></structure>
        <data></data>
        <style></style>
        <events></events>
      </interface>

      <logic></logic>

    </uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
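Because UIML is plain XML, the five-section structure can be inspected with any generic XML parser. A sketch (the sample document below is hypothetical, filled in only enough to parse):

```python
# Sketch: listing the five UIML interface sections of a hypothetical
# document using Python's standard-library XML parser.
import xml.etree.ElementTree as ET

uiml_doc = """<uiml version="2.0">
  <interface name="demo" class="MyApps">
    <description/><structure/><data/><style/><events/>
  </interface>
  <logic/>
</uiml>"""

root = ET.fromstring(uiml_doc)
interface = root.find("interface")
sections = [child.tag for child in interface]
print(sections)  # -> ['description', 'structure', 'data', 'style', 'events']
```

A UIML authoring tool would walk these same sections to resolve the description/structure hierarchy and the style bindings described above.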

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors, as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.
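The event-based state-transition behaviour that AIM delegates to SCXML can be sketched as a toy state machine (the states and events below are invented for a button-like interactor; SCXML itself adds hierarchy, parallel states and executable content on top of this basic idea):

```python
# Toy transition table for a hypothetical button-like interactor.
# (state, event) -> next state; unknown events leave the state unchanged.
transitions = {
    ("idle", "focus"): "focused",
    ("focused", "press"): "pressed",
    ("pressed", "release"): "focused",
    ("focused", "blur"): "idle",
}

def step(state, event):
    """Return the next state for an event, staying put on unknown events."""
    return transitions.get((state, event), state)

state = "idle"
for event in ["focus", "press", "release"]:
    state = step(state, event)
print(state)  # -> focused
```

In AIM, such a chart would describe the behaviour common to all modes and media, while the concrete interactor models bind the abstract events to mode-specific input.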

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes, by sending events to state charts or by calling functions in the backend
• Operators - specify multimodal relations, and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), which are compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower level
• Abstraction: from low to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements
• Relation: a group where two or more elements are related to each other
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements
• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). A declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionality (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
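The two-way binding between interactors and data-model elements described above can be sketched as follows (a toy illustration with invented names, not MARIA's XML syntax or runtime):

```python
# Sketch of two-way binding: updating the interactor updates the bound
# data element, and a data-model change is visible through the interactor.
class DataModel:
    def __init__(self):
        self.values = {}

class BoundInteractor:
    def __init__(self, model, key):
        self.model, self.key = model, key

    def set(self, value):           # user edits the interactor
        self.model.values[self.key] = value

    def get(self):                  # rendering reads through the binding
        return self.model.values.get(self.key)

model = DataModel()
age_field = BoundInteractor(model, "age")
age_field.set(42)                   # interactor -> data model
model.values["age"] = 43            # data model -> interactor
print(age_field.get())  # -> 43
```

Because the interactor holds a reference into the data model rather than its own copy of the value, correlations between UI elements (e.g. conditional layout driven by a data element) fall out of the same mechanism.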

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers
• Mobile CUIs model graphical interfaces for mobile devices
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute with information on the title, background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute with information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
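The desktop refinements listed above amount to a mapping from abstract interactor types to sets of concrete widgets. A hypothetical authoring tool might hold that mapping as a simple lookup table (the table contents are taken from the list above; the function and its name are invented):

```python
# The AUI -> Graphical Desktop CUI refinement options from the list above,
# as a lookup table a hypothetical authoring tool could consult.
desktop_refinements = {
    "Activator": ["button", "text_link", "image_link", "image_map", "mailto"],
    "Alarm": ["text", "audio_file"],
    "Description": ["text", "image", "audio", "video", "table"],
    "MultipleChoice": ["check_box", "list_box"],
    "Navigator": ["image_link", "text_link", "button", "image_map"],
    "NumericalEditFull": ["text_field", "spin_box"],
    "NumericalEditInRange": ["text_field", "spin_box", "track_bar"],
    "PositionEdit": ["image_map"],
    "SingleChoice": ["radio_button", "list_box", "drop_down_list", "image_map"],
    "TextEdit": ["text_field", "text_area"],
}

def concrete_options(abstract_interactor):
    """Return the desktop widgets that can realize an abstract interactor."""
    return desktop_refinements[abstract_interactor]

print(concrete_options("SingleChoice"))
# -> ['radio_button', 'list_box', 'drop_down_list', 'image_map']
```

Choosing one entry from each option list is precisely the reification step from the AUI to a Desktop CUI for that interactor.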

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis, or whether the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of the vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;
• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);
• changing the synthesis properties (such as volume and gender);
• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
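The three vocal event types, each carrying a message and a re-prompt flag, could be modelled as follows (an illustrative sketch with invented message texts and handler names, not MARIA's vocal CUI syntax):

```python
# Sketch of the vocal event model: each event type carries a message
# attribute and a re-prompt flag, as described above.
events = {
    "noinput": {"message": "I did not hear anything.", "reprompt": True},
    "nomatch": {"message": "I did not understand.", "reprompt": True},
    "help":    {"message": "You can say a city name.", "reprompt": False},
}

def handle(event, last_prompt):
    """Render the event message, optionally re-synthesizing the last prompt."""
    spec = events[event]
    output = [spec["message"]]
    if spec["reprompt"]:
        output.append(last_prompt)
    return output

print(handle("nomatch", "Which city?"))
# -> ['I did not understand.', 'Which city?']
```

With `reprompt` set to False, as for help here, the platform would speak only the event message and then keep listening, rather than repeating the previous question.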

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to that of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation, and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and is not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period, we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744 Software Engineering — Metamodel for Development Methodologies is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much at the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



• handheld - Intended for handheld devices (typically small screen, limited bandwidth)

• print - Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• projection - Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.

• screen - Intended primarily for color computer screens
• speech - Intended for speech synthesizers. Note: CSS2 had a similar media type called aural for this purpose. See the appendix on aural style sheets for details.

• tty - Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the tty media type.

• tv - Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available)

Few browsers supported CSS media queries apart from screen and print. More recently the specification has added further capabilities, and it finally became a W3C Recommendation in June 2012:

• http://www.w3.org/TR/2012/REC-css3-mediaqueries-20120619/

A further possibility is to use web page scripts to adapt the markup and presentation locally in the browser. Each browser provides the user agent string, but by itself this doesn't provide sufficient information for effective adaptation. The scripting APIs for accessing information about the device are extremely limited. In part this is driven by concerns over privacy: the more information a website can determine about a device, the easier it is to fingerprint a user and to build up a detailed picture of the user's browsing habits.

DIAL, CSS Media Queries and client-side scripting all fail to tackle the challenge of separating out different levels of design concerns. This is where research work on model-based user interface design has the most promise. The next sections will describe how this was picked up by W3C, and the launch of the Model-Based User Interfaces Working Group.


3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face to face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter/
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face to face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome. This is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaption to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:

Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications go through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104): http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
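The dialogue/interactor/event-handler structure just described can be sketched in a few lines. This is a loose illustration of the idea only; the class and event names are our assumptions, not the ASFE-DL vocabulary.

```python
# Illustrative sketch of an abstract dialogue with event handlers.
# Names ("submit", AbstractInteractionUnit's API) are assumptions,
# not taken from the ASFE-DL submission.
class AbstractInteractionUnit:
    def __init__(self, name):
        self.name = name
        self.interactors = []
        self.handlers = {}           # event name -> callback

    def on(self, event, handler):
        """Register a handler for an event (user- or system-triggered)."""
        self.handlers[event] = handler

    def trigger(self, event, *args):
        return self.handlers[event](*args)

search = AbstractInteractionUnit("Search")
results = AbstractInteractionUnit("Results")
# Navigating between dialogues is modelled here as just another handler.
search.on("submit", lambda query: results.name)
assert search.trigger("submit", "flights to Pisa") == "Results"
```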

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT '97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams.


The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2  or  T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1*  or  T1(n)
Concurrency          T1 ||| T2  or  T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

where the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.
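The semantics of the most widely supported operator, enabling (T1 >> T2), can be sketched as follows. This is a minimal illustration of the operator's meaning under our own assumptions, not ISTI-CNR's implementation.

```python
# Sketch of the CTT "enabling" operator T1 >> T2: the second task
# only becomes enabled once the first has completed.
class Task:
    def __init__(self, name):
        self.name = name
        self.done = False

    def complete(self):
        self.done = True

def enabled(sequence, task):
    """Under enabling (>>), a task is enabled only when every task
    before it in the sequence has completed."""
    i = sequence.index(task)
    return all(t.done for t in sequence[:i])

enter, submit = Task("EnterQuery"), Task("SubmitQuery")
seq = [enter, submit]
assert not enabled(seq, submit)   # SubmitQuery starts disabled
enter.complete()
assert enabled(seq, submit)       # enabled once EnterQuery is done
```

Disabling (T1 [> T2) and suspend/resume would be modelled analogously, with the second task able to deactivate or interrupt the first.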

Tasks can be allocated as follows

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item
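The annotated hierarchy of use objects described above can be sketched as a small tree, where elementary use objects are the leaves. The class and attribute names here are illustrative assumptions, not UseML's actual element names.

```python
# Sketch of a UseML-style use model: a hierarchy of use objects,
# each annotatable with attributes such as an eligible user group.
# Names are illustrative assumptions, not the UseML schema.
class UseObject:
    def __init__(self, name, user_group=None, children=None):
        self.name = name
        self.user_group = user_group
        self.children = children or []

    def is_elementary(self):
        # Elementary use objects are the atomic leaves of the hierarchy.
        return not self.children

model = UseObject("OperatePump", children=[
    UseObject("SetFlowRate", user_group="operator"),   # an Input
    UseObject("StartPump", user_group="operator"),     # a Trigger
])
assert not model.is_elementary()
assert all(c.is_elementary() for c in model.children)
```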

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">

  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>

  <logic></logic>

</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application dependent but appliance independent events, and then bind them to appliance dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster: "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details see the link above.
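The event-based state transitions that AIM expresses in SCXML can be illustrated with a hand-rolled transition table. This is a toy sketch of the idea, not SCXML itself and not AIM's actual state charts; states and events are our assumptions.

```python
# Minimal sketch of SCXML-style event-driven state transitions for
# an interactor. States and events are illustrative assumptions.
class Interactor:
    # (current state, event) -> next state
    TRANSITIONS = {
        ("idle", "start"): "presenting",
        ("presenting", "suspend"): "suspended",
        ("suspended", "resume"): "presenting",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)

ui = Interactor()
for event in ["start", "suspend", "resume"]:
    ui.handle(event)
assert ui.state == "presenting"
```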

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes by sending events to state charts or to call functions in the backend
• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.
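The observation/operator/action structure can be sketched as follows. This is our own illustrative rendering of the idea, not the MIM schema; the event names and the `Mapping` API are assumptions.

```python
# Sketch of a multimodal mapping: an operator links observations of
# state changes to actions that raise events. Names are assumptions,
# not the MIM specification's vocabulary.
class Mapping:
    def __init__(self, operator, observations, actions):
        self.operator = operator          # e.g. "sequence", "equivalence"
        self.observations = observations  # predicates over observed events
        self.actions = actions            # callbacks that emit events

    def fire(self, event, out):
        # Only when every observation matches do the actions run.
        if all(obs(event) for obs in self.observations):
            for act in self.actions:
                act(out)

log = []
drop = Mapping(
    "sequence",
    observations=[lambda e: e == "drag-end"],
    actions=[lambda out: out.append("send:drop")],
)
drop.fire("drag-start", log)   # observation fails, no action
drop.fire("drag-end", log)     # observation matches, action fires
assert log == ["send:drop"]
```

This mirrors the exemplary drag-and-drop mapping listed below.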

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users, with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is presented to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.
• Relation: a group where two or more elements are related to each other.
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.

• Repeater: used to repeat the content according to data retrieved from a generic data source.
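As an informal illustration of the presentation/interactor structure described above, the following sketch builds a MARIA-like abstract UI and classifies each interactor by its abstract subtype. The element and attribute names are hypothetical, not normative MARIA syntax:

```python
import xml.etree.ElementTree as ET

# Hypothetical MARIA-like abstract UI: a presentation containing a
# grouping with a single-choice selection, a text edit, and an activator.
AUI = """
<interface name="pizza_order">
  <presentation name="order_form">
    <grouping name="toppings_group">
      <single_choice name="size">
        <choice_element value="small"/>
        <choice_element value="large"/>
      </single_choice>
      <text_edit name="delivery_address"/>
      <activator name="submit_order"/>
    </grouping>
  </presentation>
</interface>
"""

root = ET.fromstring(AUI)
# Classify each interactor element by the abstract subtype it belongs to.
SUBTYPES = {"single_choice": "Selection", "multiple_choice": "Selection",
            "text_edit": "Edit", "activator": "Control",
            "navigator": "Control", "description": "Only output"}
interactors = {el.get("name"): SUBTYPES[el.tag]
               for el in root.iter() if el.tag in SUBTYPES}
print(interactors)  # → {'size': 'Selection', 'delivery_address': 'Edit', 'submit_order': 'Control'}
```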

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features, available already at the abstract level and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
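The two-way interactor/data binding described in the Data Model feature above can be sketched as follows. This is a minimal observer-style illustration, not part of MARIA itself; all class and method names are ours:

```python
class DataElement:
    """A data-model element with observers, so interactor and data stay in sync."""
    def __init__(self, value=None):
        self._value, self._observers = value, []

    def bind(self, callback):
        self._observers.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for cb in self._observers:
            cb(new)  # propagate data-model changes to bound interactors

class TextEditInteractor:
    """An abstract Edit interactor bound to a data element."""
    def __init__(self, data: DataElement):
        self.data = data
        self.state = data.value
        data.bind(lambda v: setattr(self, "state", v))  # data -> interactor

    def user_input(self, text):
        """Interactor -> data: a user edit updates the bound element."""
        self.state = text
        self.data.value = text

address = DataElement("")
field = TextEditInteractor(address)
field.user_input("Via Moruzzi 1, Pisa")  # user edits the field
print(address.value)                     # the bound data element follows
address.value = "corrected"              # model change updates the interactor
print(field.state)
```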

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation-language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.
• Mobile CUIs model graphical interfaces for mobile devices.
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.
• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented on the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation-language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, holding information on the title, background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute, holding the information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.
• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.
• A TextEdit can be implemented as a text_field or a text_area.
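The abstract-to-concrete refinement for the desktop platform amounts to a mapping from abstract interactor types to permitted concrete widgets. The widget lists below follow the text above; the `refine` function and its fallback policy are our own illustrative sketch, not part of MARIA:

```python
# Permitted desktop widgets per abstract interactor, as listed above.
DESKTOP_REFINEMENTS = {
    "Activator":            {"button", "text_link", "image_link", "image_map", "mailto"},
    "Alarm":                {"text", "audio_file"},
    "Description":          {"text", "image", "audio", "video", "table"},
    "MultipleChoice":       {"check_box", "list_box"},
    "Navigator":            {"image_link", "text_link", "button", "image_map"},
    "NumericalEditFull":    {"text_field", "spin_box"},
    "NumericalEditInRange": {"text_field", "spin_box", "track_bar"},
    "PositionEdit":         {"image_map"},
    "SingleChoice":         {"radio_button", "list_box", "drop_down_list", "image_map"},
    "TextEdit":             {"text_field", "text_area"},
}

def refine(abstract_type: str, preferred: str) -> str:
    """Pick the preferred concrete widget if the platform allows it,
    otherwise fall back to an arbitrary (here: alphabetically first) one."""
    allowed = DESKTOP_REFINEMENTS[abstract_type]
    return preferred if preferred in allowed else sorted(allowed)[0]

print(refine("SingleChoice", "drop_down_list"))  # allowed as-is
print(refine("TextEdit", "drop_down_list"))      # not allowed: falls back
```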

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.
• A Description can be implemented as:
  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.
  - pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.
• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).
• Changing the synthesis properties (such as volume and gender).
• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
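A minimal sketch of how a vocal dialogue might dispatch these three events, each with its message and re-prompt attributes. This is illustrative only; the configuration values and function names are ours, not MARIA syntax:

```python
import re

# Per-event configuration: what to say, and whether to re-prompt.
VOCAL_EVENTS = {
    "noinput": {"message": "Sorry, I did not hear you.", "reprompt": True},
    "nomatch": {"message": "Sorry, I did not understand.", "reprompt": True},
    "help":    {"message": "You can say small or large.", "reprompt": False},
}
GRAMMAR = re.compile(r"^(small|large)$")   # the set of acceptable inputs
LAST_PROMPT = "What size do you want?"

def handle_turn(user_input):
    """Classify one user turn and return the platform's spoken response."""
    if user_input is None:                      # timeout expired
        event = "noinput"
    elif user_input.strip().lower() == "help":  # explicit help request
        event = "help"
    elif not GRAMMAR.match(user_input):         # no grammar match
        event = "nomatch"
    else:
        return f"accepted: {user_input}"
    cfg = VOCAL_EVENTS[event]
    return cfg["message"] + (" " + LAST_PROMPT if cfg["reprompt"] else "")

print(handle_turn(None))     # noinput: message + re-prompt
print(handle_turn("jumbo"))  # nomatch: message + re-prompt
print(handle_turn("large"))  # matches the grammar
```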

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano: MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments. ACM Transactions on Computer-Human Interaction, Vol. 16, N. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



3.2.1 MBUI Incubator Group

W3C work on model-based user interfaces started with a preliminary meeting in Pisa, Italy on 23 July 2008, hosted by the Istituto di Scienze e Tecnologie dell'Informazione, and concluded with the participants agreeing to work together on preparing a draft charter for a W3C Incubator Group:

• http://www.w3.org/2008/07/model-based-ui.html

The first face-to-face meeting of the Model-Based User Interfaces Incubator Group was held on 24 October 2008, hosted by W3C at the 2008 Technical Plenary in Mandelieu, France. The Charter and home page for the Model-Based Interfaces Incubator Group can be found at:

• http://www.w3.org/2005/Incubator/model-based-ui/charter/
• http://www.w3.org/2005/Incubator/model-based-ui/

Work proceeded via teleconferences and a wiki. A second face-to-face meeting took place in Brussels on 11-12 June 2009, hosted by the Université catholique de Louvain. The Incubator Group report was published on 4 May 2010:

• http://www.w3.org/2005/Incubator/model-based-ui/XGR-mbui-20100504/

It provides an introduction to model-based UI design, a survey of the state of the art, an outline of motivating use cases, and a case study of user interfaces in the digital home. The concluding remarks cover suggested standardization work items.

The publication of the Incubator Group report was followed by a Workshop in Rome, which is described in the next section.

3.2.2 MBUI Workshop

The W3C Workshop on Future Standards for Model-Based User Interfaces was held on 13-14 May 2010 in Rome, hosted by the Istituto di Scienze e Tecnologie dell'Informazione. The website includes the statements of interest submitted by participants, the agenda, and links to talks and the Workshop Report, which can be found at:

• http://www.w3.org/2010/02/mbui/report.html

The Workshop was timed to follow the publication of the report of the W3C Model-Based UI Incubator Group. Participants presented model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face-to-face meetings. The first face-to-face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the CAMELEON Reference Framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications pass through the following stages. These have been annotated with the dates at which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were seven submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project and covers a meta-model and XML serialization for the abstract UI layer of the CAMELEON Reference Framework; see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram.

Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
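As a purely illustrative sketch (the element and attribute names below are invented for this report and are not taken from the submitted ASFE-DL schema), such an abstract dialogue might be serialized along these lines:

```xml
<!-- Hypothetical ASFE-DL-style abstract dialogue; element and
     attribute names are invented for illustration only -->
<abstractInteractionUnit id="searchDialogue">
  <!-- an interactor collecting user input, bound to the domain model -->
  <interactor type="input" id="queryField" binds="domain:query"/>
  <!-- an interactor activating a method on the domain model -->
  <interactor type="activator" id="searchButton"
              invokes="domain:performSearch"/>
  <!-- an interactor navigating to another dialogue -->
  <interactor type="navigator" id="toResults" target="resultsDialogue"/>
  <!-- a handler for a system-triggered event -->
  <eventHandler event="searchCompleted" action="navigate:toResults"/>
</abstractInteractionUnit>
```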

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization, and XML format for the interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR, first published at INTERACT'97, and has since been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator            Symbol
Enabling            T1 >> T2 or T1 []>> T2
Disabling           T1 [> T2
Interruption        T1 |> T2
Choice              T1 [] T2
Iteration           T1* or T1(n)
Concurrency         T1 ||| T2 or T1 |[]| T2
Optionality         [T]
Order Independency  T1 |=| T2

Here the second symbol for enabling is for task enabling with information passing; likewise, the second symbol for concurrency is for concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.
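As a purely illustrative sketch, a small task model might be exchanged in XML along the following lines; the element and attribute names are hypothetical rather than taken from the CTT schema:

```xml
<!-- Hypothetical XML interchange for a small CTT task model: the user
     first selects an item, which enables (>>) confirming the order.
     Element and attribute names are illustrative, not the CTT schema -->
<taskModel name="MakeOrder">
  <task name="MakeOrder" category="abstraction">
    <subtasks operator="enabling">
      <task name="SelectItem" category="interaction"/>
      <task name="ConfirmOrder" category="interaction"/>
    </subtasks>
  </task>
</taskModel>
```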

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item
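By way of illustration only, a use model built from these elementary use objects might be serialized along the following lines; the element and attribute names are invented for this sketch rather than taken from the UseML schema:

```xml
<!-- Purely illustrative sketch of a UseML-style use model; element
     and attribute names are assumptions, not the official schema -->
<useModel name="PumpControl">
  <!-- a use object annotated with eligible user groups -->
  <useObject name="OperatePump" userGroups="operator maintenance">
    <trigger name="StartPump"/>
    <select name="SelectMode" range="manual automatic"/>
    <input name="SetFlowRate"/>
    <output name="ShowPressure"/>
    <change name="AdjustFlowRate"/>
  </useObject>
</useModel>
```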

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
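To make the five sections concrete, here is a small hand-written sketch of an interface with a single button; the class, property and event names are illustrative assumptions rather than an excerpt from the UIML specification:

```xml
<!-- Hand-written sketch following the five-section template above;
     class, property and event names are illustrative assumptions -->
<uiml version="2.0">
  <interface name="Hello" class="MyApps">
    <description>
      <part name="okButton" class="Button"/>
    </description>
    <structure>
      <part name="okButton"/>
    </structure>
    <style>
      <property part-name="okButton" name="label">OK</property>
    </style>
    <events>
      <!-- bind the appliance-independent "selected" event to an action -->
      <event part-name="okButton" name="selected" action="logic:onOk"/>
    </events>
  </interface>
</uiml>
```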

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping
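As an illustration of the SCXML notation used for interactor behaviour, a minimal state chart for a two-state interactor (a button toggling between idle and pressed) could look like this; the state and event names are invented for the example:

```xml
<!-- Minimal SCXML sketch: a toggle interactor with two states.
     State and event names are illustrative, not taken from AIM -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <transition event="press" target="pressed"/>
  </state>
  <state id="pressed">
    <transition event="release" target="idle"/>
  </state>
</scxml>
```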

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework, where task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
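To make these features concrete, the following is a loose, hand-written sketch of an abstract presentation with a bound data model, an external function and an activator; the element and attribute names are assumptions for illustration and may differ from the official MARIA XML schema:

```xml
<!-- Loose sketch of a MARIA-style abstract user interface; element
     and attribute names are illustrative, not the official schema -->
<interface name="login">
  <data_model>
    <!-- data types would be defined with XML Schema constructs -->
    <field name="username" type="string"/>
  </data_model>
  <external_functions>
    <!-- a back-end function declared by its signature -->
    <function name="checkCredentials" input="username" output="boolean"/>
  </external_functions>
  <presentation name="login_form">
    <grouping name="credentials">
      <!-- a text edit interactor bound to a data model element -->
      <text_edit name="username_field" binding="username"/>
      <!-- an activator raising an activation event -->
      <activator name="submit" function="checkCredentials"/>
    </grouping>
  </presentation>
</interface>
```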

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent, but implementation language-independent, details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers

• Mobile CUIs model graphical interfaces for mobile devices

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute holding information on the title, background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute holding the information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)

• Changing the synthesis properties (such as volume and gender)

• Inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (when the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to that of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, pp. 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems, DSV-IS 2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


model-based approaches from a variety of perspectives, reflecting many years of research work in this area. The Workshop's final session looked at the opportunity for launching standards work on meta-models as a basis for exchanging models between different markup languages. The following photo shows the workshop participants.

3.2.3 Formation of MBUI Working Group

The W3C Model-Based User Interfaces Working Group was launched on 17 October 2011 and is chartered until 13 November 2013. Work is proceeding with a mix of regular teleconferences, the mailing list and wiki, and face to face meetings. The first face to face was hosted by DFKI in Kaiserslautern, Germany on 9-10 February 2012, and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter


The Working Group Charter defines the scope of the permitted work items, and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaption to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) Concur Task Trees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and Test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)


But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications pass through the stages listed below. These have been annotated with the dates the MBUI deliverables were envisioned by the charter to reach each stage:

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
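As a purely illustrative sketch of this structure (the element and attribute names below are hypothetical, and not taken from the actual ASFE-DL schema), two linked abstract dialogues might be serialized along these lines:

```xml
<!-- Hypothetical ASFE-DL-style abstract UI sketch: all element and
     attribute names here are illustrative, not the real vocabulary -->
<abstractUI name="flightBooking">
  <abstractInteractionUnit id="search">
    <input id="destination"/>
    <activator id="go">
      <!-- on activation, invoke a domain model method and navigate -->
      <event type="activation" method="findFlights" navigateTo="results"/>
    </activator>
  </abstractInteractionUnit>
  <abstractInteractionUnit id="results">
    <output id="flightList"/>
  </abstractInteractionUnit>
</abstractUI>
```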

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR, first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator            Symbol
Enabling            T1 >> T2 or T1 []>> T2
Disabling           T1 [> T2
Interruption        T1 |> T2
Choice              T1 [] T2
Iteration           T1* or T1(n)
Concurrency         T1 ||| T2 or T1 |[]| T2
Optionality         [T]
Order Independency  T1 |=| T2

Here the second symbol for enabling denotes task enabling with information passing; likewise, the second symbol for concurrency denotes concurrent communicating tasks.
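To illustrate how these operators combine (a made-up example, not taken from the CTT submission), a simple booking task could be written in the operator notation as:

```
MakeBooking = (EnterDates []>> SelectRoom []>> ConfirmPayment) [> Cancel
EnterDates  = EnterArrival |=| EnterDeparture
```

Here the arrival and departure dates can be entered in either order, each step passes its result on to the next via the information-passing form of enabling, and Cancel can disable the whole sequence at any point.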

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and the dialog modelling language UseDM have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item
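As a hypothetical illustration of a use model (the element and attribute names are invented for this sketch, and do not follow the normative UseML schema), a small pump control panel might be structured as:

```xml
<!-- Hypothetical UseML-style use model; names are illustrative only -->
<useModel name="PumpControl">
  <useObject name="OperatePump" userGroups="operator" importance="high">
    <elementaryUseObject type="select"  name="ChoosePump"/>
    <elementaryUseObject type="input"   name="SetTargetPressure"/>
    <elementaryUseObject type="trigger" name="StartPump"/>
    <elementaryUseObject type="output"  name="ReadActualPressure"/>
  </useObject>
</useModel>
```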

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

    UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore, UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


    <?xml version="1.0" standalone="no"?>
    <uiml version="2.0">
      <interface name="" class="MyApps">
        <description></description>
        <structure></structure>
        <data></data>
        <style></style>
        <events></events>
      </interface>
      <logic></logic>
    </uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
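For illustration, the template might be filled in as a minimal "hello world" interface along the following lines (a sketch only; the JFrame/JLabel class names assume a Java Swing vocabulary, as used in common UIML examples):

```xml
<?xml version="1.0"?>
<uiml version="2.0">
  <interface>
    <structure>
      <!-- a top-level frame containing a single label -->
      <part id="TopHello" class="JFrame">
        <part id="hello" class="JLabel"/>
      </part>
    </structure>
    <style>
      <!-- bind the label's text property to a concrete value -->
      <property part-name="hello" name="text">Hello World!</property>
    </style>
  </interface>
</uiml>
```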

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are


1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.
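Since SCXML is a published W3C notation, the behaviour of a simple discrete input interactor can be sketched directly in it; note that the state and event names below are our own invention, not taken from the AIM submission:

```xml
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
       initial="idle">
  <!-- a button-like interactor: waits in 'idle', and on 'press'
       raises an 'activated' event before returning on 'release' -->
  <state id="idle">
    <transition event="press" target="pressed"/>
  </state>
  <state id="pressed">
    <onentry>
      <raise event="activated"/>
    </onentry>
    <transition event="release" target="idle"/>
  </state>
</scxml>
```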

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on Abstract Interactor Model (AIM) Specifications.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators - specify multimodal relations, and link a set of observations to a set of actions

There are six operators, including sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), which are compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.
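As a hypothetical sketch (the element names below are illustrative, and do not follow the actual UsiXML syntax), the three kinds of mappings might relate a single abstract selector to concrete widgets as follows:

```xml
<!-- Illustrative only: element names do not follow the UsiXML schema -->
<mappings>
  <!-- reification: abstract selection realized as a concrete widget -->
  <reification source="aui:selectCity" target="cui:dropDownListCity"/>
  <!-- abstraction: recover the abstract interactor from the widget -->
  <abstraction source="cui:dropDownListCity" target="aui:selectCity"/>
  <!-- reflexion: retarget at the same level, e.g. for restructuring -->
  <reflexion source="cui:dropDownListCity" target="cui:radioGroupCity"/>
</mappings>
```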

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users, with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner), and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: Allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: Allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: Allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: Represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: The interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings), and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
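Putting these concepts together, a MARIA-like abstract presentation for a login dialogue might look roughly as follows (a sketch only: the element and attribute names approximate, but are not guaranteed to match, the normative MARIA XML schema):

```xml
<!-- Sketch of a MARIA-like abstract presentation; illustrative names -->
<presentation name="login">
  <grouping name="credentials">
    <!-- interactors bound to elements of the data model -->
    <text_edit id="username" data_ref="user/name"/>
    <text_edit id="password" data_ref="user/password"/>
    <activator id="submit">
      <!-- activation event invoking a declared external function -->
      <activation_event external_function="checkCredentials"/>
    </activator>
  </grouping>
</presentation>
```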

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent, but implementation language-independent, details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers
• Mobile CUIs model graphical interfaces for mobile devices
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, holding information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute, holding information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user, and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version, the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.

bull A Navigator can be implemented as a goto for automaticchange of presentation a link for user-triggered change ofpresentation and a menu for supporting the possibility ofmultiple target presentations

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).
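These recording attributes map naturally onto VoiceXML's record element; a minimal sketch, assuming a VoiceXML 2.0 platform (the form id, field name, prompt wording and server URL are illustrative):

```xml
<!-- Illustrative VoiceXML 2.0 rendering of an ObjectEdit as a record element -->
<form id="leaveMessage">
  <record name="message" beep="true" maxtime="20s" finalsilence="3s">
    <prompt>Please record your message after the beep.</prompt>
  </record>
  <filled>
    <!-- hypothetical endpoint: store the recorded audio on the server -->
    <submit next="http://example.com/store" namelist="message"/>
  </filled>
</form>
```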

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);

• changing the synthesis properties (such as volume and gender);

• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
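These three event types correspond directly to VoiceXML event handlers; a minimal sketch, assuming a VoiceXML 2.0 platform (the field name and prompt wording are illustrative):

```xml
<!-- Illustrative VoiceXML 2.0 handlers for noinput, nomatch and help -->
<field name="city">
  <prompt>Which city are you travelling to?</prompt>
  <noinput>I did not hear anything. <reprompt/></noinput>
  <nomatch>Sorry, I did not understand that. <reprompt/></nomatch>
  <help>Please say the name of a city, for example Pisa.</help>
</field>
```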

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, N. 4, November 2009, pp. 19:1-19:30, ACM Press.

3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation, and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation, and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation, and refines the metamodel introduced in earlier versions of CTT.

The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and not a normative part of the specification.
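The JSON serialization is only envisaged, not specified; as a purely illustrative sketch (the field names below are assumptions, not part of the specification), a CTT-style task tree could round-trip through JSON as follows:

```python
import json

# Hypothetical JSON shape for a CTT-style task model; the field names
# ("task", "category", "operator", "children") are illustrative only --
# the W3C draft defines an XML schema, not a JSON format.
task_model = {
    "task": "AccessMuseumInfo",
    "category": "abstract",
    "children": [
        {"task": "SelectArtwork", "category": "interaction"},
        {"operator": "enabling"},  # temporal operator between siblings
        {"task": "ShowArtworkInfo", "category": "application"},
    ],
}

serialized = json.dumps(task_model, indent=2)  # interchange representation
restored = json.loads(serialized)              # lossless round trip
```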

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation, and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design-time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, "Software Engineering - Metamodel for Development Methodologies", is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


and the second on 14-15 June in Pisa, hosted by ISTI-CNR.

3.3 MBUI Working Group Charter

• http://www.w3.org/2011/01/mbui-wg-charter

The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the Cameleon reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design, along with use cases
• Working Group Note defining a glossary of terms, as used in the other deliverables

W3C Recommendation Track specifications pass through the following stages. These have been annotated with the dates at which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by the W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.

3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework; see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104): http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model-Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet the requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram.

Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
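To make this concrete, here is a purely illustrative sketch (this is not the actual ASFE-DL syntax; every element and attribute name is invented for exposition) of a dialogue with interactors and an event handler:

```xml
<!-- Invented, ASFE-DL-flavoured sketch, not the submitted syntax:
     one abstract dialogue collecting input, invoking a domain method,
     and navigating to another dialogue on success -->
<abstractInteractionUnit id="login">
  <interactor type="input" binds="user.name"/>
  <interactor type="activator" invokes="authenticate"/>
  <eventHandler event="authenticated" navigateTo="home"/>
</abstractInteractionUnit>
<abstractInteractionUnit id="home"/>
```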

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for the interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR, first published at INTERACT'97, and has since been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.


The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1(n)
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

Here, the second symbol for enabling denotes task enabling with information passing. Likewise, the second symbol for concurrency denotes concurrent communicating tasks.
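As a purely illustrative example (the task names are invented, not taken from the submission), several of these operators can be combined in a single expression:

```
# Entering the user name and password can happen in either order
# (order independency); submitting passes the collected data on
# (enabling with information passing); Cancel can disable the task.
(EnterUserName |=| EnterPassword) []>> Submit [> Cancel
```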

Tasks can be allocated as follows:

• System - data presentation or an action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support the interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and the dialog modelling language UseDM have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated with attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model.


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s, to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces; see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore, UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS; see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
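As a minimal illustrative sketch following the template above (the part names, the Java mapping and the event binding are examples invented here, not taken from the submission):

```xml
<!-- Illustrative UIML fragment: one button mapped to a Java AWT widget,
     with an appliance-independent press event bound to an action -->
<uiml version="2.0">
  <interface name="Hello" class="MyApps">
    <structure>
      <part name="okButton" class="Button"/>
    </structure>
    <style>
      <property part-name="okButton" name="rendering">java.awt.Button</property>
      <property part-name="okButton" name="label">OK</property>
    </style>
    <events>
      <event part-name="okButton" class="ButtonPressed" action="confirm"/>
    </events>
  </interface>
</uiml>
```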

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:

1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes, by sending events to state charts or by calling functions in the backend

• Operators - specify multimodal relations, and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium; see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework, where task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models, using W3C's State Chart XML (SCXML).
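Such event-driven behaviour can be sketched directly in SCXML; a minimal example, assuming SCXML 1.0 (the state and event names are illustrative):

```xml
<!-- Illustrative SCXML: a two-state dialogue reacting to user events -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <transition event="user.select" target="editing"/>
  </state>
  <state id="editing">
    <transition event="user.submit" target="idle"/>
  </state>
</scxml>
```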

You can define different kinds of mappings:

• Reification: from a high to a lower level
• Abstraction: from a low to a higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of users, with a focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and variouskinds of disabilities including both physical and cognitivedisabilities

348 MARIA

MARIA (Model-based language for Interactive Applications)[Paterno2000] is a universal declarative multiple abstraction-level XML-based language for modelling interactive applications inubiquitous environments

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.
• Relation: a group where two or more elements are related to each other.
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.
• Repeater: used to repeat the content according to data retrieved from a generic data source.

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound with elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlation between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back end (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
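The bidirectional binding between interactors and data elements described above can be sketched as a small observer mechanism. The class and method names are assumptions for illustration, not MARIA's actual runtime.

```python
class DataElement:
    """A value in the data model, observable by bound interactors.
    Illustrative sketch; not MARIA's actual runtime machinery."""
    def __init__(self, value=None):
        self.value = value
        self._bound = []

    def bind(self, interactor):
        # Establish the two-way binding and push the initial value.
        self._bound.append(interactor)
        interactor._element = self
        interactor.state = self.value

    def set(self, value):
        # Data model changed: propagate to every bound interactor.
        self.value = value
        for interactor in self._bound:
            interactor.state = value

class EditInteractor:
    """An edit interactor whose state mirrors a bound data element."""
    def __init__(self):
        self.state = None
        self._element = None

    def user_edit(self, value):
        # User interaction changed the interactor: update the data model too.
        self.state = value
        if self._element is not None:
            self._element.value = value

name = DataElement("Alice")
edit = EditInteractor()
name.bind(edit)
name.set("Bob")          # model -> interactor
edit.user_edit("Carol")  # interactor -> model
```

A real implementation would also dispatch Property Change Events with their optional preconditions at each update; the sketch keeps only the value synchronisation.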

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation-language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.
• Mobile CUIs model graphical interfaces for mobile devices.
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.
• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation-language independent) to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute with information on the title, background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute with information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.
• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.
• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.

  ◦ pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;
• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);
• changing the synthesis properties (such as volume and gender);
• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
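The three vocal event types and their message/re-prompt attributes could be handled along these lines. This is a hedged sketch: the event names come from the text above, but the class shape and handler API are invented for illustration.

```python
# Illustrative dispatcher for the vocal event model described above.
# The event names (noinput, nomatch, help) come from the text; the
# class shapes are assumptions made for this sketch.

class VocalEventHandler:
    def __init__(self, message, reprompt=False):
        self.message = message    # what to render when the event occurs
        self.reprompt = reprompt  # synthesize the last communication again?

class VocalDialogue:
    def __init__(self, prompt):
        self.prompt = prompt
        self.handlers = {}  # event name -> VocalEventHandler

    def on(self, event, message, reprompt=False):
        self.handlers[event] = VocalEventHandler(message, reprompt)

    def handle(self, event):
        """Return the utterances to synthesize for a given event."""
        handler = self.handlers[event]
        out = [handler.message]
        if handler.reprompt:
            out.append(self.prompt)  # repeat the last communication
        return out

d = VocalDialogue("Please say your destination city.")
d.on("noinput", "Sorry, I did not hear anything.", reprompt=True)
d.on("nomatch", "Sorry, I did not understand.", reprompt=True)
d.on("help", "You can name any city served by this line.")
```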

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, N. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to that of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and not a normative part of the specification.
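Since formats such as JSON are envisaged alongside the XML schema, a task model instance might be serialized along these lines. The field names below are invented for this sketch and are not the normative W3C task-model schema.

```python
import json

# Illustrative JSON serialization of a tiny task model; the field names
# are assumptions for this sketch, not the W3C task-model schema.
task_model = {
    "task": {
        "name": "WithdrawCash",
        "operator": "enabling",  # temporal relation between the subtasks
        "subtasks": [
            {"name": "InsertCard", "allocation": "user"},
            {"name": "EnterPIN", "allocation": "user",
             "postcondition": "pin_verified"},
            {"name": "DispenseCash", "allocation": "system"},
        ],
    }
}

# Round-trip through the interchange format.
encoded = json.dumps(task_model)
decoded = json.loads(encoded)
```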

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744 Software Engineering — Metamodel for Development Methodologies is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


The Working Group Charter defines the scope of the permitted work items and the roadmap as envisioned when the charter came into effect. The charter is subject to review by the W3C Advisory Committee, which includes one person per member organization (regardless of size), and the W3C Management Team. The Model-Based User Interfaces (MBUI) Working Group is currently chartered until 30 November 2013. The scope is as follows:

Use cases and Requirements

As needed to guide and justify the design decisions for the development of the specifications.

Specification of meta-models for interchange of models between authoring tools for (context-aware) user interfaces for web-based interactive application front ends

This could take the form of UML diagrams and OWL ontologies, and cover the various levels of abstraction (e.g. as defined in the CAMELEON reference framework, as well as that needed to support dynamic adaptation to changes in the context).

Specification of a markup language and API which realize the meta-models

This is expected to draw upon existing work such as (but not restricted to) ConcurTaskTrees (CTT), Useware Markup Language (useML), UsiXML or UIML.

Test assertions and test suite for demonstrating interoperability

This is needed to support progress along the W3C Recommendation Track, and in particular to exit from the Candidate Recommendation phase.

Model-based user interface design primer

An explanation/guideline for how to apply the specifications to support the development of the associated use cases.

Open Source Implementations

Working Group members may wish to develop open source implementations of authoring tools to demonstrate the potential, and for use in developing and applying the test suite described above.

Some features are explicitly out of scope for the Working Group:


Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers)

But where appropriate, it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model-based user interface design, along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications pass through the following stages. These have been annotated with the dates at which the charter envisioned the MBUI deliverables reaching each stage:

1. First Public Working Draft - initial publication (expected March 2012)
2. Last Call Working Draft - stable version (expected September 2012)
3. Candidate Recommendation - test suites and implementation reports (expected February 2013)
4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)
5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl/

This is a submission on behalf of the FP7 Serenoa project, and covers a meta-model and XML serialization for the abstract UI layer of the CAMELEON Reference Framework; see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models, and corresponds to the Platform-Independent Model (PIM) in Model-Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:

The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2  or  T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1(n)
Concurrency          T1 ||| T2  or  T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

Here the second symbol for enabling denotes task enabling with information passing; likewise, the second symbol for concurrency denotes concurrent communicating tasks.
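As a small illustration of the notation, a hypothetical ordering task could be written linearly with these operators (all task names are invented for the example):

```
MakeOrder = Login >> (BrowseCatalogue ||| SearchProduct) >>
            SelectProduct >> (ConfirmOrder [] CancelOrder)
```

Here completing Login enables the concurrent browsing and searching subtasks, and the user finishes by choosing between confirming and cancelling the order.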

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item
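A use model built from these elementary use objects might look roughly as follows; the element names and attributes below are illustrative guesses based on the description above, not the normative UseML vocabulary:

```xml
<!-- Illustrative sketch only: actual UseML element names may differ -->
<useModel>
  <useObject name="OperatePump" userGroups="operator,maintainer">
    <trigger name="StartPump"/>
    <select name="ChooseOperatingMode"/>
    <input name="SetTargetPressure"/>
    <output name="ShowCurrentPressure"/>
    <change name="AdjustFlowRate"/>
  </useObject>
</useModel>
```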

The following diagram describes the UseDM meta-model


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s, to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces; see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS; see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
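As a minimal filled-in sketch of the template above (illustrative only; exact attribute names vary between UIML versions), a one-label, one-button interface could be written:

```xml
<?xml version="1.0"?>
<!-- A minimal sketch in the spirit of the template above;
     attribute names may differ between UIML versions -->
<uiml>
  <interface>
    <structure>
      <part id="TopLabel" class="Label"/>
      <part id="QuitButton" class="Button"/>
    </structure>
    <style>
      <property part-name="TopLabel" name="text">Hello, world!</property>
      <property part-name="QuitButton" name="label">Quit</property>
    </style>
  </interface>
</uiml>
```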

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactor models are:


1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.
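Since interactor behaviour is expressed with W3C's SCXML, a two-state, button-like interactor can be described with a minimal SCXML document such as the following (the state and event names are invented for the example):

```xml
<!-- A minimal SCXML state machine for a push-button interactor -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
       initial="released">
  <state id="released">
    <transition event="press" target="pressed"/>
  </state>
  <state id="pressed">
    <transition event="release" target="released"/>
  </state>
</scxml>
```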

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes, by sending events to state charts or by calling functions in the backend

• Operators - specify multimodal relations, and link a set of observations to a set of actions

Six operators are defined, including sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium; see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

In this framework, task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), which are compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models, using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower level
• Abstraction: from low to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users, with focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework, with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface") that refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is presented to the user, and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features, available already at the abstract level and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime, modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
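Pulling these features together, an abstract presentation with a data-bound single-choice interactor and an activator tied to an external function might be sketched as follows. The element names here are a rough approximation for illustration, not the normative MARIA schema:

```xml
<!-- Rough sketch of a MARIA-style AUI; not the normative schema -->
<interface name="pizza_order">
  <data_model>
    <xs:element name="size" type="xs:string"/>
  </data_model>
  <external_functions>
    <function name="submitOrder" input="size" output="confirmation"/>
  </external_functions>
  <presentation name="choose_size">
    <!-- selection interactor bound to the "size" data element -->
    <single_choice id="size_choice" data_reference="size">
      <choice_element value="small"/>
      <choice_element value="large"/>
    </single_choice>
    <!-- activation event invokes the declared external function -->
    <activator id="order_button">
      <activation_event function="submitOrder"/>
    </activator>
  </presentation>
</interface>
```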

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent, but implementation language-independent, details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers

• Mobile CUIs model graphical interfaces for mobile devices

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, with information on the title, background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute, with information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, an image_link, an image_map (an image with the definition of a set of areas, each one associated with a different value) or a mailto

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file

• A Description can be implemented as a text, image, audio, video or table

• A MultipleChoice can be implemented as a check_box or a list_box

• A Navigator can be implemented as an image_link, text_link, button or image_map

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons)

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar

• A PositionEdit can be implemented as an image_map

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map

• A TextEdit can be implemented as a text_field or a text_area

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user, and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)

• changing the synthesis properties (such as volume and gender)

• inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time); nomatch (the input provided does not match any possible acceptable input); and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs; and re-prompt, to indicate whether or not to synthesize the last communication again.
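The event semantics just described can be modelled as a small dispatcher. The sketch below is an illustrative model of the behaviour, not MARIA syntax; all class and attribute names are invented for the example:

```python
from dataclasses import dataclass

# Illustrative model of the vocal event semantics described above;
# names are invented for the sketch, not taken from MARIA.
@dataclass
class VocalEventHandler:
    message: str      # what to render when the event occurs
    reprompt: bool    # whether to synthesize the last communication again

def handle_event(event_type, handlers, last_prompt):
    """Return the list of utterances the platform should synthesize."""
    handler = handlers[event_type]          # "noinput", "nomatch" or "help"
    utterances = [handler.message]
    if handler.reprompt:
        utterances.append(last_prompt)      # repeat the last communication
    return utterances

handlers = {
    "noinput": VocalEventHandler("Sorry, I did not hear you.", True),
    "nomatch": VocalEventHandler("Sorry, I did not understand.", True),
    "help":    VocalEventHandler("Say small or large.", False),
}

print(handle_event("noinput", handlers, "What size pizza would you like?"))
# -> ['Sorry, I did not hear you.', 'What size pizza would you like?']
```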

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation, and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation, and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation, and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions, and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and is not a normative part of the specification.
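No JSON interchange format has been standardized yet; purely as an illustration of what one might look like, a fragment of a task model with an enabling operator could be serialized as follows (all property and task names are invented):

```json
{
  "task": "MakeOrder",
  "operator": "enabling",
  "subtasks": [
    { "task": "Login", "allocation": "user" },
    { "task": "SelectProduct", "allocation": "user",
      "precondition": "loggedIn" },
    { "task": "ShowConfirmation", "allocation": "system" }
  ]
}
```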

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation, and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, the Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, pp. 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Transactions on Computer-Human Interaction, ACM, 2009, Vol. 16, No. 4, pp. 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of the 10th Int. Conf. on Design, Specification and Verification of Interactive Systems, DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


                                            • MARIA
                                              • Abstract User Interface
                                              • Concrete User Interface
                                                • Concrete Desktop User Interface
                                                • Concrete Vocal User Interface
                                                  • MBUI WG Note - Introduction to Model-Based UI Design
                                                  • MBUI WG Note - Glossary of Terms
                                                  • MBUI WG Specification - Task Models for Model-Based UI Design
                                                  • MBUI WG Specification - Abstract User Interface Models
                                                  • MBUI WG Future Plans
                                                    • CoDeMoDIS proposal for a COST Action
                                                    • ISO 24744 standardisation action
                                                    • Conclusions
                                                    • References
Page 28: Standarization Actions Report - Europa · 2017-04-20 · Standarization Actions Report Project no. FP7 - 258030 Deliverable D6.2.1 Executive Summary This document provides a description

Defining markup and APIs for direct interpretation by interactive application front ends (e.g. web browsers).

But where appropriate it should be feasible to define markup, events and APIs that are supported by libraries, e.g. JavaScript modules. This may be needed to support dynamic adaptation to changes in the context.

This restriction was included in the charter to reassure browser vendors that there is no requirement for changes to Web browsers. Instead, the work on model-based user interface design is aimed at authoring tools and associated run-time libraries that run on top of browsers.

3.3.1 Work Items

The expected deliverables are as follows:

• Recommendation Track specification for task models
• Recommendation Track specification for abstract user interface models
• Working Group Note introducing model based user interface design along with use cases
• Working Group Note defining a glossary of terms as used in the other deliverables

W3C Recommendation Track specifications pass through the following stages. These have been annotated with the dates by which the charter envisioned the MBUI deliverables reaching each stage.

1. First Public Working Draft - initial publication (expected March 2012)

2. Last Call Working Draft - stable version (expected September 2012)

3. Candidate Recommendation - test suites and implementation reports (expected February 2013)

4. Proposed Recommendation - reviewed by W3C Advisory Committee (expected June 2013)

5. Recommendation - supplemented by errata (expected August 2013)

In the preparatory work leading up to drafting the charter, there was general agreement that it would be best to focus initial work on standards for task models and abstract UI models. Once this has been achieved, the next step will be to work on standards for concrete UI models and context adaptation. This would require re-chartering the MBUI Working Group for a further period. Further details will be discussed in the conclusion to this report.


3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face to face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language, combining the strengths of the two languages, unifying concepts, and adding new features that will allow this language to meet requirements for context aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.
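To make this concrete, here is a rough sketch of how such a description might be serialized. Note that the element and attribute names below are hypothetical illustrations, not the actual ASFE-DL vocabulary from the submission:

```xml
<!-- Hypothetical sketch: element and attribute names are illustrative,
     not the normative ASFE-DL vocabulary -->
<asfe-dl>
  <abstractInteractionUnit id="login">
    <interactor type="input" id="username"/>
    <interactor type="input" id="password"/>
    <interactor type="activator" id="submit">
      <!-- a handler reacting to a user-triggered event -->
      <eventHandler event="activate" action="navigateTo('home')"/>
    </interactor>
  </abstractInteractionUnit>
  <abstractInteractionUnit id="home"/>
</asfe-dl>
```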

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows

Operator            Symbol
Enabling            T1 >> T2 or T1 []>> T2
Disabling           T1 [> T2
Interruption        T1 |> T2
Choice              T1 [] T2
Iteration           T1* or T1(n)
Concurrency         T1 ||| T2 or T1 |[]| T2
Optionality         [T]
Order Independency  T1 |=| T2

Here the second symbol for enabling denotes task enabling with information passing. Likewise, the second symbol for concurrency denotes concurrent communicating tasks.
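As a small worked example (the task names are invented for illustration), a login task could be expressed with these operators as:

```text
Login = (EnterUserName ||| EnterPassword) []>> Submit [> Cancel
```

That is, user name and password may be entered concurrently; the entered data is passed on when submission is enabled; and Cancel can disable the whole task at any point.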

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system

• User input - data entry by the user

• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user and task oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights, and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item
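Purely as an illustration of how these elementary use objects compose into a hierarchical use model (the element and attribute names here are hypothetical, not the normative UseML schema), a fragment might look like:

```xml
<!-- Hypothetical sketch: names are illustrative, not the normative UseML schema -->
<useModel>
  <!-- a structured use object annotated with an eligible user group -->
  <useObject name="ConfigurePump" userGroups="maintenance">
    <select name="ChooseOperatingMode"/>
    <input name="EnterFlowRate"/>
    <trigger name="StartPump"/>
    <output name="ShowPumpStatus"/>
  </useObject>
</useModel>
```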

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance independent presentation concepts with appliance dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application dependent data. The style element binds UI components to their implementation, e.g. java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application dependent but appliance independent events, and then bind them to appliance dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
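As a hedged sketch of how these sections fit together (the part names, property usage and class binding below are invented for illustration, and details may differ across UIML versions), a minimal interface might look like:

```xml
<?xml version="1.0" standalone="no"?>
<!-- Illustrative sketch only: part names and bindings are invented;
     consult the OASIS UIML specification for the normative details -->
<uiml version="2.0">
  <interface name="Greeter" class="MyApps">
    <structure>
      <!-- one component, organized as a trivial hierarchy -->
      <part id="hello" class="Label"/>
    </structure>
    <style>
      <!-- the style section binds the component to its presentation -->
      <property part-name="hello" name="text">Hello, world!</property>
    </style>
  </interface>
</uiml>
```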

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:

1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping
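To give a flavour of the SCXML notation mentioned above (the state and event names here are invented for illustration), a simple two-state interactor behaviour could be written as:

```xml
<!-- Illustrative sketch: state and event names are invented -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <!-- an incoming "focus" event moves the interactor to the active state -->
    <transition event="focus" target="active"/>
  </state>
  <state id="active">
    <transition event="blur" target="idle"/>
  </state>
</scxml>
```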

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces, and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML and defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework, where task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise
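Purely as an illustration of how these three models describe a context of use (UsiXML's actual schema differs in names and structure, so everything below is a hypothetical sketch):

```xml
<!-- Hypothetical sketch: names are illustrative, not the normative UsiXML schema -->
<contextModel>
  <userModel id="elderlyUser" preferredFontSize="large"/>
  <platformModel id="tablet" screenWidth="1280" screenHeight="800"/>
  <environmentModel id="outdoors" ambientLight="bright" noiseLevel="high"/>
</contextModel>
```

An adaptation engine could then select or transform the concrete UI model according to whichever of these context models currently applies.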

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users, with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor-compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features, available already at the abstract level and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlation between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function that specifies its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, that specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, that can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
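As a hedged sketch of these abstract-level concepts (the element and attribute names are illustrative inventions, not taken from the MARIA schema), a presentation with a data-bound interactor and an activation event might be written along these lines:

```xml
<!-- Illustrative sketch only: element and attribute names are invented,
     not the normative MARIA XML vocabulary -->
<presentation id="search">
  <grouping>
    <!-- a Text Edit interactor bound to an element of the data model -->
    <text_edit id="query" data_binding="searchTerms"/>
    <!-- an Activator raising an activation event that invokes an external function -->
    <activator id="go">
      <activation_event function="performSearch"/>
    </activator>
  </grouping>
</presentation>
```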

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers

• Mobile CUIs model graphical interfaces for mobile devices

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, holding information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute, holding the information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user, and the set of possible user input that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)

• Changing the synthesis properties (such as volume and gender)

• Inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (when the user asks for support, in any platform specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environment, ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology, and moreover there is an understandable tendency for this to be focused on the needs of academic study as opposed to that of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.
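To give a flavour of what the interchange format expresses, a task model for a simple login dialogue might be serialized along the following lines. This is a hand-written sketch only: the element and attribute names (taskModel, task, category, operator) are illustrative and should be checked against the published Working Draft rather than taken as normative.

```xml
<!-- Hypothetical sketch of a task model serialization (names are illustrative) -->
<taskModel name="Login">
  <task name="PerformLogin" category="abstract" operator="enabling">
    <task name="EnterCredentials" category="interaction" operator="orderIndependence">
      <task name="EnterUserName" category="interaction"/>
      <task name="EnterPassword" category="interaction"/>
    </task>
    <task name="CheckCredentials" category="application"/>
  </task>
</taskModel>
```

The hierarchy mirrors the CTT approach: a parent task is decomposed into subtasks, and the temporal operator on a parent relates its children.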

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation) and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration: it is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted and the full proposal submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces, Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, pp. 134-135.
• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments, ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.
• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages, Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



3.4 MBUI Submissions

When the Model-Based User Interfaces Working Group was formed, the first step was to invite submissions of background work as a basis for discussions leading to a consensus on the specifications we planned to create. There were 7 submissions by the time we met for the first face-to-face meeting in Kaiserslautern. The following subsections briefly review each in turn. Further information can be found on the MBUI Wiki at:

• http://www.w3.org/wiki/MBUI_Submissions

3.4.1 Advanced Service Front-End Description Language (ASFE-DL)

• http://www.w3.org/2012/01/asfe-dl

This is a submission on behalf of the FP7 Serenoa project and covers a meta-model and XML serialization for the abstract UI layer of the Cameleon Reference Framework, see:

• CAMELEON (Context Aware Modelling for Enabling and Leveraging Effective interactiON) Project (FP5-IST4-2000-30104), http://giove.isti.cnr.it/projects/cameleon.html

The ASFE-DL language is expected to evolve further during the remainder of the Serenoa project, but the version submitted just focuses on abstract user interface models and corresponds to the Platform-Independent Model (PIM) in Model Driven Engineering (MDE). ASFE-DL draws upon experience with previous work on MARIA and UsiXML, both of which were submitted separately to the MBUI Working Group. The idea behind ASFE-DL is to create a unified and more complete language combining the strengths of the two languages, unifying concepts and adding new features that will allow this language to meet requirements for context-aware adaptation of service front ends. The ASFE-DL meta-model (for the submission) is defined by the following UML diagram:


Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

• http://www.w3.org/2012/02/ctt

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR and first published at INTERACT'97, and since then has been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator            Symbol
Enabling            T1 >> T2 or T1 []>> T2
Disabling           T1 [> T2
Interruption        T1 |> T2
Choice              T1 [] T2
Iteration           T1* or T1n
Concurrency         T1 ||| T2 or T1 |[]| T2
Optionality         [T]
Order Independency  T1 |=| T2

Where the second symbol for enabling is for task enabling with information passing. Likewise, the second symbol for concurrency is for concurrent communicating tasks.

Tasks can be allocated as follows:

• System - data presentation or action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)
• Select: choosing one or more items from a range of given ones
• Input: entering an absolute value, overwriting previous values
• Output: the user gathers information from the user interface
• Change: making relative changes to an existing value or item

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203.0%20v0103.doc

Here is a pertinent extract:

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore, UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
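By way of illustration, a minimal structure and style for a single button might look as follows. This is a hand-written sketch loosely based on the published UIML specifications: the part id, the class name "Button" and the property names are examples chosen for this report, not normative vocabulary.

```xml
<!-- Illustrative UIML sketch: one button with its style properties -->
<interface>
  <structure>
    <part id="hello" class="Button"/>
  </structure>
  <style>
    <!-- bind the appliance-independent part to concrete presentation properties -->
    <property part-name="hello" name="label">Say hello</property>
    <property part-name="hello" name="rendering">java.awt.Button</property>
  </style>
</interface>
```

The key design point is the separation: the structure names abstract parts, while the style maps those parts onto a concrete toolkit, so retargeting to another appliance mainly means swapping the style section.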

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.
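To illustrate the SCXML approach to interactor behaviour, the fragment below uses the standard SCXML vocabulary to describe a simple interactor that toggles between two states. The state and event names are invented for this sketch and are not taken from the AIM submission.

```xml
<!-- Illustrative SCXML: an interactor toggling between enabled and disabled -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="enabled">
  <state id="enabled">
    <!-- a "disable" event moves the interactor to the disabled state -->
    <transition event="disable" target="disabled"/>
  </state>
  <state id="disabled">
    <transition event="enable" target="enabled"/>
  </state>
</scxml>
```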

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes
• Actions - used to trigger state changes by sending events to state charts or to call functions in the backend
• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf
• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf
• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf
• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML and is defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high- to lower-level
• Abstraction: from low- to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users, with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: Allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.
• Edit: Allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: Allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).
• Only output: Represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements
• Relation: a group where two or more elements are related to each other
• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements
• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: The interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime, modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases etc.). One declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
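Since the data model uses standard XML Schema Definition constructs, a data type that an interactor could be bound to might be declared as follows. This is a plain XSD sketch using only standard schema vocabulary; how MARIA references the type from an interactor definition is not shown here.

```xml
<!-- Plain XML Schema sketch of a data type an interactor could bind to -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="age">
    <xs:simpleType>
      <!-- restrict the value to a plausible range; a bound interactor would
           then constrain what input it accepts at runtime -->
      <xs:restriction base="xs:integer">
        <xs:minInclusive value="0"/>
        <xs:maxInclusive value="120"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:element>
</xs:schema>
```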

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs: model graphical interfaces for desktop computers
• Mobile CUIs: model graphical interfaces for mobile devices
• Multimodal Desktop CUIs: model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs: model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs: interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language-independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute holding information on the title, background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute holding information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related by an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, an image_link, an image_map (an image with the definition of a set of areas, each one associated with a different value) or a mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
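To illustrate, a desktop CUI presentation refining these abstract interactors might be serialized along the following lines. This is a hypothetical sketch: the element and attribute names (presentation, grouping, text_field, radio_button, button) are illustrative and are not taken verbatim from the MARIA desktop schema.

```xml
<!-- Hypothetical sketch of a Graphical Desktop CUI presentation;
     element and attribute names are illustrative only -->
<presentation name="login"
              presentation_setting="title: Login; background: #ffffff; font: sans-serif">
  <grouping grouping_setting="technique: fieldset; ordering: true">
    <text_field id="username"/>                   <!-- refines a TextEdit -->
    <radio_button id="role" values="user admin"/> <!-- refines a SingleChoice -->
    <button id="submit" label="Log in"/>          <!-- refines an Activator -->
  </grouping>
</presentation>
```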

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), of the speech recognizer (e.g. sensitivity, accuracy level) and of the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. the terminating DTMF character).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for platform recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of the vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).

• Changing the synthesis properties (such as volume and gender).

• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces lies in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
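To make the vocal refinements and event model concrete, a vocal selection with handlers for the three event types might look roughly like this. The fragment is hypothetical: the element and attribute names are illustrative rather than the literal MARIA vocal vocabulary.

```xml
<!-- Hypothetical sketch of a Vocal CUI interactor; names are illustrative only -->
<vocal_selection id="genre" choices="1">          <!-- refines a SingleChoice -->
  <request>Which genre would you like: rock, jazz or classical?</request>
  <grammar type="textual" values="rock jazz classical"/>
  <noinput message="Sorry, I did not hear anything." re-prompt="true"/>
  <nomatch message="Please answer rock, jazz or classical." re-prompt="true"/>
  <help message="Say the name of one of the three genres." re-prompt="false"/>
</vocal_selection>
```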

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend-resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such it is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744 "Software Engineering - Metamodel for Development Methodologies" is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16(4), 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



Different colours are used to highlight different parts of the metamodel: sky-blue for the main structure of the interface, green for the interactor hierarchy, red for the classes that model the relationships between interactors, and yellow for the classes that model the UI behaviour.

Loosely put, ASFE-DL can be used to describe the user interface as a set of interrelated abstract dialogues (AbstractInteractionUnits), where each dialogue has a set of interactors for collecting user input, updating the domain model, activating methods on the domain model, and navigating between dialogues. ASFE-DL provides a means to define handlers for a variety of events, which can be triggered by user actions or by the system itself.

3.4.2 The ConcurTaskTrees Notation (CTT)

http://www.w3.org/2012/02/ctt/

The ConcurTaskTrees (CTT) notation provides a metamodel, visualization and XML format for interchange of user interface task models between different design tools. CTT was developed by ISTI-CNR, first published at INTERACT'97, and has since been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.:


The temporal operators are as follows:

Operator            Symbol
Enabling            T1 >> T2 or T1 []>> T2
Disabling           T1 [> T2
Interruption        T1 |> T2
Choice              T1 [] T2
Iteration           T1* or T1(n)
Concurrency         T1 ||| T2 or T1 |[]| T2
Optionality         [T]
Order Independency  T1 |=| T2

The second symbol for enabling denotes task enabling with information passing; likewise, the second symbol for concurrency denotes concurrent communicating tasks.
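As a hypothetical illustration of the notation (the task names are invented, not taken from the specification), a simple reservation task could be written as:

```
MakeReservation = EnterDates []>> ShowAvailability >> SelectRoom [> Cancel
EnterDates      = EnterArrival |=| EnterDeparture
```

Here EnterDates passes the entered dates on to ShowAvailability (enabling with information passing), the arrival and departure dates can be supplied in either order (order independency), and Cancel can disable the whole task at any point.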

Tasks can be allocated as follows:

• System - data presentation or an action carried out by the system

• User input - data entry by the user

• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:


There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization


The following diagram illustrates the various kinds of models involved:

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated with attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item
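A use model assembled from these elementary use objects might be sketched as follows. This is a hypothetical serialization: the element and attribute names are illustrative, and the actual UseML schema may differ.

```xml
<!-- Hypothetical sketch of a UseML use model; names are illustrative only -->
<useModel name="pumpControl">
  <useObject name="ConfigurePump" userGroups="maintenance" importance="high">
    <input name="SetFlowRate"/>        <!-- enter an absolute value -->
    <change name="AdjustPressure"/>    <!-- relative change to an existing value -->
    <trigger name="StartPump"/>        <!-- execute a function of the device -->
    <output name="ShowPumpStatus"/>    <!-- gather information from the UI -->
  </useObject>
</useModel>
```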

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s, to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:


<?xml version="1.0" standalone="no"?>
<uiml version="2.0">

  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>

  <logic></logic>

</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media
2. Concrete Interactor Model - describing the user interface for a certain mode or medium
3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors, as a UML diagram:

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details, see the link above.
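The SCXML notation mentioned above expresses such behaviour as event-driven state transitions. The following minimal fragment uses the standard SCXML elements; the state and event names (idle, active, focus, select, blur) are invented for illustration rather than taken from the AIM submission.

```xml
<!-- Behaviour of a selection interactor as an SCXML state chart;
     state and event names are illustrative -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <transition event="focus" target="active"/>
  </state>
  <state id="active">
    <transition event="select" target="done"/>
    <transition event="blur" target="idle"/>
  </state>
  <final id="done"/>
</scxml>
```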

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes, by sending events to state charts or by calling functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators, including sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTrees (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), which are compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models, using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from a higher to a lower level

• Abstraction: from a lower to a higher level

• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities

• Platform models, e.g. device capabilities

• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users, with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework, with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.

• Relation: a group where two or more elements are related to each other.

• Composite Description: represents a group aimed at presenting contents through a mixture of Description and Navigator elements.

• Repeater: used to repeat the content according to data retrieved from a generic data source.

bull Data Model The interface definition contains description ofthe data types that are manipulated by the user interfaceThe interactors can be bound with elements the data modelwhich means that at runtime modifying the state of aninteractor will change also the value of the bound dataelement and vice-versa This mechanism allows themodelling of correlation between UI elements conditionallayout conditional connections between presentations inputvalues format The data model is defined using the standardXML Schema Definition constructs

bull Generic Back End The interface definition contains a set ofExternal Functions declarations which representsfunctionalities exploited by the UI but implemented by ageneric application back-end support (eg web servicescode libraries databases etc) One declaration contains thesignature of the external function that specifies its name andits inputoutput parameters

bull Event Model Each interactor definition has a number ofassociated events that allow the specification of UI reactiontriggered by the user interaction Two different classes ofevents have been identified the Property Change Events

42

that specify the value change of a property in the UI or in thedata model (with an optional precondition) and theActivation Events that can be raised by activators and areintended to specify the execution of some applicationfunctionalities (eg invoking an external function)

bull Continuous update of fields It is possible to specify that agiven field should be periodically updated invoking anexternal function

bull Dynamic Set of User Interface Elements The languagecontains constructs for specifying partial presentationupdates (dynamically changing the content of entiregroupings) and the possibility to specify a conditionalnavigation between presentationsThis set of new featuresallows having already at the abstract level a model of theuser interface that is not tied to layout details but it iscomplete enough for reasoning on how UI supports both theuser interaction and the application back end
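The two-way data binding and the Property Change Events described above can be illustrated with a short sketch. This is an assumed minimal implementation in Python, not MARIA's actual runtime; the class names are hypothetical.

```python
# Minimal sketch (assumptions, not MARIA's API) of two-way binding:
# changing the interactor updates the bound data element and vice versa.
class DataElement:
    def __init__(self, value=None):
        self.value = value
        self._listeners = []

    def bind(self, listener):
        self._listeners.append(listener)

    def set(self, value):
        self.value = value
        for notify in self._listeners:   # property change event
            notify(value)

class TextEditInteractor:
    def __init__(self, data_element):
        self.data = data_element
        self.state = data_element.value
        data_element.bind(self._on_data_change)

    def _on_data_change(self, value):    # data model -> interactor
        self.state = value

    def user_types(self, text):          # interactor -> data model
        self.state = text
        self.data.value = text

name = DataElement("Alice")
field = TextEditInteractor(name)
field.user_types("Bob")   # interactor updates the bound data element
print(name.value)         # Bob
name.set("Carol")         # data change propagates back to the interactor
print(field.state)        # Carol
```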

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUI: models graphical interfaces for desktop computers.

• Mobile CUI: models graphical interfaces for mobile devices.

• Multimodal Desktop CUI: models interfaces that combine the graphical and vocal modalities for desktop computers.

• Multimodal Mobile CUI: models interfaces that combine the graphical and vocal modalities for mobile devices.

• Vocal CUI: models interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented on the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension of the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, holding information on the title, the background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute, holding information on the grouping display technique (grid, fieldset, bullet, background colour or image) and on whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, an image_link, an image_map (an image with the definition of a set of areas, each one associated with a different value) or a mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinement for obtaining the Vocal CUI definition involves defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), of the speech recognizer (e.g. sensitivity, accuracy level) and of the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF character).

The interactor refinements are the following:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis or the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).

• Changing the synthesis properties (such as volume and gender).

• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces lies in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in some platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, indicating whether or not to synthesize the last communication again.
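The noinput/nomatch/help events with their message and re-prompt attributes can be sketched as a small dispatch table. This is an illustrative Python sketch, not MARIA's vocal syntax; the prompts and structure are assumptions.

```python
# Illustrative sketch (not MARIA syntax) of the vocal event model above:
# each event type carries a message and a re-prompt flag.
EVENTS = {
    "noinput": {"message": "I did not hear anything.", "reprompt": True},
    "nomatch": {"message": "I did not understand that.", "reprompt": True},
    "help":    {"message": "You can say 'yes' or 'no'.", "reprompt": False},
}

def handle_event(name, last_prompt):
    """Return the utterances to synthesize when the event fires."""
    spec = EVENTS[name]
    out = [spec["message"]]
    if spec["reprompt"]:
        out.append(last_prompt)  # synthesize the last communication again
    return out

print(handle_event("nomatch", "Do you want to continue?"))
# ['I did not understand that.', 'Do you want to continue?']
```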

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, N. 4, November 2009, pp. 19:1-19:30, ACM Press.

3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend/resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who cannot see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered optional and is not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
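The suspend/resume behaviour described for automotive interfaces can be sketched in a few lines. This is a hypothetical Python sketch of the idea, with illustrative class names; it is not drawn from the specification.

```python
# Minimal sketch (names are illustrative) of suspend/resume: a hazard
# alert suspends the current UI task, which resumes once the hazard
# has been passed.
class UITask:
    def __init__(self, name):
        self.name = name
        self.state = "running"

    def suspend(self):
        self.state = "suspended"

    def resume(self):
        self.state = "running"

class AutomotiveUI:
    def __init__(self, task):
        self.task = task
        self.alert = None

    def hazard_alert(self, message):   # safety-critical service takes over
        self.task.suspend()
        self.alert = message

    def hazard_cleared(self):          # original UI resumes afterwards
        self.alert = None
        self.task.resume()

ui = AutomotiveUI(UITask("media player"))
ui.hazard_alert("Obstacle ahead")
print(ui.task.state)   # suspended
ui.hazard_cleared()
print(ui.task.state)   # running
```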

3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time
• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.

The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, the Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


The ConcurTaskTrees (CTT) notation was developed at CNR and first published at INTERACT'97, and has since been widely used in academic and industrial institutions.

Task models can be used in a variety of ways:

• Improve understanding of the application domain
• Record the result of interdisciplinary discussion
• Support effective design
• Support usability evaluation
• Support the user during a session
• Documentation

The aim of CTT is to provide fairly high-level descriptions of user interfaces. It is not intended as a programming language, and deliberately omits details that would risk derailing high-level design discussions. Extensions have been proposed for cooperative task models involving multiple users.

The notation covers:

• Hierarchical structuring of tasks
• Temporal relations between tasks
• Task allocation (user or system)
• Task preconditions

CTT task models are frequently depicted as diagrams, e.g.

The temporal operators are as follows:

Operator             Symbol
Enabling             T1 >> T2 or T1 []>> T2
Disabling            T1 [> T2
Interruption         T1 |> T2
Choice               T1 [] T2
Iteration            T1* or T1(n)
Concurrency          T1 ||| T2 or T1 |[]| T2
Optionality          [T]
Order Independency   T1 |=| T2

where the second symbol for enabling denotes task enabling with information passing, and likewise the second symbol for concurrency denotes concurrent communicating tasks.
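The enabling operator (T1 >> T2), in which the second task becomes available only after the first completes, can be simulated in a few lines. This is a hypothetical sketch with illustrative names, not part of the CTT specification or its tooling.

```python
# Illustrative sketch of CTT's enabling operator (T1 >> T2): the second
# task is only enabled once its predecessor has completed.
class Task:
    def __init__(self, name):
        self.name = name
        self.done = False

    def perform(self):
        self.done = True

def enabled(task, predecessor=None):
    """A task is enabled if it has no predecessor or the predecessor is done."""
    return predecessor is None or predecessor.done

select = Task("SelectFlight")
pay = Task("PayFlight")        # SelectFlight >> PayFlight

print(enabled(pay, select))    # False: SelectFlight not yet completed
select.perform()
print(enabled(pay, select))    # True
```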

Tasks can be allocated as follows:

• System - data presentation or an action carried out by the system
• User input - data entry by the user
• Cognition - a cognitive task carried out by the user

CTT's meta-model as a UML diagram:

There is also an XML schema to support interchange of models in the XML format.

3.4.3 Useware Markup Language (UseML)

• http://www.w3.org/wiki/Useware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and the dialog modelling language (UseDM) have been developed to support the user- and task-oriented Useware Engineering process, and have been applied to the domain of production automation and industrial environments. The Useware process has the following steps:

1. Analysis
2. Structuring
3. Design
4. Realization

The following diagram illustrates the various kinds of models involved.

The use model abstracts platform-independent tasks, actions, activities and operations into use objects that make up a hierarchically ordered structure. Each element of this structure can be annotated by attributes such as eligible user groups, access rights and importance. Use objects can be further structured into other use objects or elementary use objects. Elementary use objects represent the most basic, atomic activities of a user, such as entering a value or selecting an option.

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item
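The hierarchy of use objects with elementary use objects as leaves can be sketched as a small data structure. This is an assumed Python sketch for illustration only; the field names and example objects are hypothetical, not UseML syntax.

```python
# Illustrative sketch (names assumed, not UseML) of a use model: use
# objects form a hierarchy whose leaves are elementary use objects.
from dataclasses import dataclass, field
from typing import List

ELEMENTARY_TYPES = {"trigger", "select", "input", "output", "change"}

@dataclass
class UseObject:
    name: str
    user_groups: List[str] = field(default_factory=list)   # annotation
    children: List["UseObject"] = field(default_factory=list)
    elementary_type: str = ""   # set only on leaf (elementary) objects

    def is_elementary(self):
        return self.elementary_type in ELEMENTARY_TYPES

# Hypothetical example: controlling a conveyor in a production plant.
set_speed = UseObject("set target speed", elementary_type="input")
start = UseObject("start conveyor", elementary_type="trigger")
control = UseObject("control conveyor", user_groups=["operator"],
                    children=[set_speed, start])

print([child.is_elementary() for child in control.children])  # [True, True]
```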

The following diagram describes the UseDM meta-model.

The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

    UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore, UI design tools could represent a design in a design language and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:

1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:

AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, Node.js, Redis TupleSpace and MMI-Arch. For more details, see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes, by sending events to state charts or by calling functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators: sequence, redundance, complementary, assignment and equivalence.
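The observation/action structure of a mapping can be illustrated with a short sketch. This is an assumed Python sketch of the idea, not the MIM model's actual format; class names and the example mapping are hypothetical.

```python
# Illustrative sketch (names assumed, not the MIM format): a mapping
# links observations on a state chart to actions that it triggers.
class StateChart:
    def __init__(self):
        self.state = "idle"
        self.observers = []

    def transition(self, new_state):
        old, self.state = self.state, new_state
        for observer in self.observers:
            observer(old, new_state)

class Mapping:
    """Fire the actions whenever the observed state is entered."""
    def __init__(self, chart, observed_state, actions):
        self.observed_state = observed_state
        self.actions = actions
        chart.observers.append(self.on_change)

    def on_change(self, old, new):        # observation
        if new == self.observed_state:
            for action in self.actions:    # actions
                action()

log = []
voice = StateChart()
Mapping(voice, "recognized",
        actions=[lambda: log.append("highlight widget")])
voice.transition("recognized")
print(log)   # ['highlight widget']
```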

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation

3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), which are compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models, using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from a higher to a lower level
• Abstraction: from a lower to a higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is given as a UML class diagram (figure omitted here).
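
As an illustration only, a context-of-use description along these lines could be parsed and acted upon as sketched below. The element and attribute names are invented for this sketch, not taken from the normative UsiXML vocabulary:

```python
import xml.etree.ElementTree as ET

# Hypothetical context-of-use fragment in the spirit of UsiXML;
# element and attribute names are illustrative, not the UsiXML schema.
context_xml = """
<contextModel>
  <userModel id="u1" preferredLanguage="en" fontScale="1.5"/>
  <platformModel id="p1" screenWidth="320" screenHeight="480" pointing="touch"/>
  <environmentModel id="e1" ambientLight="low" noiseLevel="high"/>
</contextModel>
"""

root = ET.fromstring(context_xml)
platform = root.find("platformModel")

# A run-time adaptation engine could branch on such properties, e.g.
# selecting a mobile concrete UI model for narrow screens.
is_mobile = int(platform.get("screenWidth")) < 768
print(is_mobile)  # True
```

The point is simply that user, platform and environment models are ordinary structured data that adaptation rules can query.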

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface") which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is presented to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.

• Relation: a group where two or more elements are related to each other.

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.

• Repeater: used to repeat the content according to data retrieved from a generic data source.
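
As a non-normative sketch, the abstract-level structure described above can be outlined as follows. The class and field names here are our own, not MARIA's XML vocabulary:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of MARIA's abstract level; the names are ours,
# not the normative MARIA XML vocabulary.

@dataclass
class Interactor:                    # base for Selection, Edit, Control, Only output
    id: str

@dataclass
class SingleChoice(Interactor):      # a Selection with cardinality one
    options: List[str] = field(default_factory=list)

@dataclass
class TextEdit(Interactor):          # an Edit interactor for text
    value: str = ""

@dataclass
class Grouping:                      # an interactor composition
    members: List[Interactor] = field(default_factory=list)

@dataclass
class Presentation:                  # interactors plus compositions
    interactors: List[object] = field(default_factory=list)

@dataclass
class AbstractUI:                    # an AUI is a set of presentations
    presentations: List[Presentation] = field(default_factory=list)

aui = AbstractUI(presentations=[
    Presentation(interactors=[
        Grouping(members=[SingleChoice("size", options=["S", "M", "L"]),
                          TextEdit("name")])
    ])
])
print(len(aui.presentations))  # 1
```

In MARIA itself this structure is serialized as XML and also carries the data model and external function declarations discussed below.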

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features, available already at the abstract level and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough to reason about how the UI supports both the user interaction and the application back end.
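
A minimal sketch of the two-way interactor/data binding with property-change events described above. The mechanism and names are our own illustration, not MARIA's runtime API:

```python
# Illustrative two-way binding between an Edit interactor and a data
# model element, with property-change notification; not MARIA's API.

class DataElement:
    def __init__(self, value):
        self._value = value
        self._listeners = []              # property-change subscribers

    def subscribe(self, fn):
        self._listeners.append(fn)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old, self._value = self._value, new
        for fn in self._listeners:
            fn(old, new)                  # fire a property-change event

class BoundTextEdit:
    """A text Edit interactor bound two-way to a data element."""
    def __init__(self, element):
        self.element = element
        self.text = element.value
        element.subscribe(lambda old, new: setattr(self, "text", new))

    def user_types(self, text):           # user interaction updates the model
        self.element.value = text

name = DataElement("Alice")
widget = BoundTextEdit(name)
widget.user_types("Bob")
print(name.value)    # Bob
name.value = "Carol"
print(widget.text)   # Carol
```

Changing either side propagates to the other, which is exactly the correlation mechanism the Data Model feature relies on.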

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.

• Mobile CUIs model graphical interfaces for mobile devices.

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.

• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending the existing entities through an inheritance mechanism for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model.

The existing elements with new attributes are:

• Presentation: contains the presentation_setting attribute, which holds information on the title, the background (color or image) and the font used.

• Grouping: contains the grouping_setting attribute, which holds information on the grouping display technique (grid, fieldset, bullet, background color or image) and on whether the elements are related with an ordering or hierarchy relation.

The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
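
A toy sketch of this abstract-to-concrete refinement table follows. The mapping data mirrors the list above, while the lookup helper itself is our own illustration and not part of MARIA:

```python
# Possible desktop refinements per abstract interactor, mirroring the
# list above; the lookup helper is illustrative, not part of MARIA.
DESKTOP_REFINEMENTS = {
    "Activator": ["button", "text_link", "image_link", "image_map", "mailto"],
    "Alarm": ["text", "audio_file"],
    "Description": ["text", "image", "audio", "video", "table"],
    "MultipleChoice": ["check_box", "list_box"],
    "Navigator": ["image_link", "text_link", "button", "image_map"],
    "NumericalEditFull": ["text_field", "spin_box"],
    "NumericalEditInRange": ["text_field", "spin_box", "track_bar"],
    "PositionEdit": ["image_map"],
    "SingleChoice": ["radio_button", "list_box", "drop_down_list", "image_map"],
    "TextEdit": ["text_field", "text_area"],
}

def refine(abstract_interactor, preferred=None):
    """Pick a concrete desktop widget for an abstract interactor,
    falling back to the first legal option when no valid preference is given."""
    options = DESKTOP_REFINEMENTS[abstract_interactor]
    return preferred if preferred in options else options[0]

print(refine("SingleChoice", "drop_down_list"))  # drop_down_list
print(refine("PositionEdit"))                    # image_map
```

A real transformation would also carry over the new attributes (presentation_setting, grouping_setting), but the widget choice reduces to a constrained lookup of this kind.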

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.

  ◦ pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);

• changing the synthesis properties (such as volume and gender);

• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, indicating whether or not to synthesize the last communication again.
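
A sketch of how a vocal runtime might dispatch these three event types is given below. The handler structure and the messages are our own illustration, not MARIA's normative behaviour:

```python
# Illustrative dispatcher for the three vocal event types described
# above; the structure and messages are ours, not MARIA's.

EVENT_CONFIG = {
    "noinput": {"message": "I did not hear anything.", "reprompt": True},
    "nomatch": {"message": "I did not understand.",    "reprompt": True},
    "help":    {"message": "You can say a size: small, medium or large.",
                "reprompt": False},
}

def handle_event(event, last_prompt):
    """Return the utterances the platform should synthesize for an event."""
    cfg = EVENT_CONFIG[event]
    out = [cfg["message"]]                # the event's message attribute
    if cfg["reprompt"]:
        out.append(last_prompt)           # re-synthesize the last communication
    return out

print(handle_event("nomatch", "Which size would you like?"))
# ['I did not understand.', 'Which size would you like?']
```

The message and re-prompt attributes of the source text map directly onto the two fields of each entry.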

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environment, ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.

3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used in model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT.

The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation) and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
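
To make the envisaged non-XML interchange concrete, here is a hypothetical JSON rendering of a small task model using the suspend/resume relation. The specification's normative format is the XML schema; this JSON shape is entirely invented for illustration:

```python
import json

# Hypothetical JSON serialization of a small task model; the spec's
# normative interchange format is XML, and this shape is invented.
task_model = {
    "task": "Drive with navigation",
    "operator": "suspend-resume",     # hazard alerts suspend the main task
    "subtasks": [
        {"task": "Navigation UI", "allocation": "system"},
        {"task": "Hazard alert",  "allocation": "system"},
    ],
}

encoded = json.dumps(task_model)
decoded = json.loads(encoded)
print(decoded["operator"])  # suspend-resume
```

Because the metamodel is a plain tree of tasks, operators and allocations, round-tripping it through JSON (or any structured format) is straightforward.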

3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The metamodel as of the beginning of August 2012 is presented as a diagram (omitted here), and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.

The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, the Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744 (Software Engineering: Metamodel for Development Methodologies) is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, pp. 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Transactions on Computer-Human Interaction, ACM, 2009, Vol. 16, No. 4, pp. 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.

The temporal operators are as follows

Operator SymbolEnabling T1 gtgt T2 or T1 []gtgt T2Disabling T1 [gt T2Interruption T1 |gt T2Choice T1 [] T2Iteration T1 or T1nConcurrency T1 ||| T2 or T1 |[]| T2Optionality [T]Order Independency T1 |=| T2

Where the second symbol for enabling is for task enabling withinformation passing Likewise the second symbol for concurrencyis for concurrent communicating tasks

Tasks can be allocated as follows

bull System - data presentation or action carried out by thesystem

bull User input - data entry by the userbull Cognition - a cognitive task carried out by the user

CTTs meta-model as a UML diagram

32

There is also an XML schema to support interchange of models inthe XML format

343 Useware Markup Language (UseML)

bull httpwwww3orgwikiUseware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modellinglanguage (UseDM) have been developed to support the user andtask oriented Useware Engineering process and has been appliedto the domain of production automation and industrialenvironments The Useware process has the following steps

1 Analysis2 Structuring3 Design4 Realization

33

The following diagram illustrates the various kinds of modelsinvolved

The use model abstracts platform-independent tasks actionsactivities and operations into use objects that make up ahierarchically ordered structure Each element of this structure canbe annotated by attributes such as eligible user groups accessrights importance Use objects can be further structured into otheruse objects or elementary use objects Elementary use objectsrepresent the most basic atomic activities of a user such asentering a value or selecting an option

Currently five types of elementary use objects exist

bull Trigger starting calling or executing a certain function ofthe underlying technical device (eg a computer or fielddevice)

bull Select choosing one or more items from a range of givenones

bull Input entering an absolute value overwriting previousvalues

bull Output the user gathers information from the user interfacebull Change making relative changes to an existing value or item

The following diagram describes the UseDM meta-model

34

The presentation model covers the layout and style aspects for theelements given in the dialogue model The presentation model isspecified using the User Interface Markup Language (UIML) whichis covered in the following subsection of this report

344 User Interface Markup Language (UIML)

This was an indirect submission as UIML is the presentationlanguage for UseML UIML was developed by Marc Abrams et alin the late 1990s to address the challenges of developing for agrowing variety of target devices for user interfaces

UIML is an XML language for implementing user interfaces see[UIML] It combines appliance independent presentation conceptswith appliance dependant concepts Please refer to the followinglink for a discussion of the relationship of UIML to other interfacedescription languages

bull httpwwwoasis-openorgcommitteesdownloadphp3419The20Relationship20of20the20UIML20320v0103doc

Here is a pertinent extract

UIML was not intended as a UI design language but ratheras a language for UI implementation Therefore UI designtools could represent a design in a design language andthen transform a UI in a design language to a canonicalrepresentation for UI implementation namely UIML

UIML has been standardized by OASIS see

bull httpswwwoasis-openorgcommitteestc_homephpwg_abbrev=uiml

UIML describes a user interface with five sections descriptionstructure data style and events The template looks like

35

ltxml version=10 standalone=nogtltuiml version=20gt

ltinterface name= class=MyAppsgtltdescriptiongtltdescriptiongtltstructuregtltstructuregtltdatagtltdatagtltstylegtltstylegtlteventsgtlteventsgt

ltinterfacegt

ltlogicgtltlogicgt

ltuimlgt

The description element assigns a name and a class to each UIcomponent The structure element defines which components arepresent from the description and how they are organized as ahierarchy The data element binds to application dependent dataThe style element binds UI components to their implementationeg java classes such as javaawtMenuItem The events elementbinds events to actions You can use application dependent butappliance independent events and then bind them to appliancedependent events through the style element OASIS is currentlyworking on version 4 of the UIML specification

An longer introduction to UIML can be found at

bull httpwww8orgw8-papers5b-hypertext-mediauimluimlhtml

[UIML] M Abrams C Phanouriou A L Batongbacal S MWilliams und J E Shuster bdquoUIML An Appliance-Independent XMLUser Interface Languageldquo Journal Computer Networks TheInternational Journal of Computer and TelecommunicationsNetworkin Bd 31 Nr 11-16 S 1695-1708 1999

345 Abstract Interactor Model (AIM) Specification

bull httpwwwmulti-accessdemintaim201220120516

AIM focuses on modelling multimodal interactions in terms ofmodes and media

The three basic interactors are

36

1 Abstract Interactor Model describing behaviour common toall modes and media

2 Concrete Interactor Model describing user interface for acertain mode or medium

3 Interaction Resource Model - a database used to store andmanage interactor state

The following figure shows the interactor class and its relations tothe three basic interactors as a UML diagram

The abstract model distinguishes input from output and continuousfrom discrete interaction The AIM class model is as follows

37

AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details see the link above.
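The SCXML-style, event-based behaviour mentioned above can be sketched as a plain state-transition table in Python (the states and events below are invented for illustration, not taken from the AIM specification):

```python
# A toy event-driven interactor in the spirit of an SCXML state machine.
# States and events are invented for illustration.
TRANSITIONS = {
    ("idle", "focus"): "focused",
    ("focused", "input"): "editing",
    ("editing", "commit"): "idle",
    ("editing", "cancel"): "focused",
}

def fire(state, event):
    """Return the next state; unhandled events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["focus", "input", "commit"]:
    state = fire(state, event)
print(state)  # back to 'idle' once the edit is committed
```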

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations, used to observe state charts (state machines) for state changes

• Actions, used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators, which specify multimodal relations and link a set of observations to a set of actions

There are six operators, including sequence, redundance, complementary, assignment and equivalence.
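A minimal Python sketch of the observation/action/operator structure described above (all identifiers are invented; MIM itself specifies these mappings declaratively):

```python
# Hypothetical sketch: an operator links a set of observations (observed
# state changes) to a set of actions (events sent to state charts).
def make_mapping(observations, actions):
    """Build a mapping that fires its actions when the operator's
    condition holds; here the condition requires all observations to
    be present (a 'complementary'-style combination)."""
    def apply(state_changes):
        if all(obs in state_changes for obs in observations):
            return list(actions)
        return []
    return apply

# An illustrative drag-and-drop mapping, echoing the exemplary mappings.
drag_start = make_mapping(
    observations=["pointer:pressed", "pointer:moved"],
    actions=["send:drag_start"],
)

print(drag_start(["pointer:pressed", "pointer:moved"]))  # fires
print(drag_start(["pointer:pressed"]))                   # does not fire
```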

Synchronization Mappings

These are predefined together with the interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, in MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from a higher to a lower level
• Abstraction: from a lower to a higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.
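The three kinds of mappings can be pictured as moves within the CAMELEON level stack; a toy classifier (the function itself is illustrative, only the level names follow the framework):

```python
# CAMELEON levels ordered from most abstract to most concrete.
LEVELS = ["task", "abstract_ui", "concrete_ui", "final_ui"]

def mapping_kind(src, dst):
    """Classify a mapping between two levels, as in UsiXML."""
    i, j = LEVELS.index(src), LEVELS.index(dst)
    if i < j:
        return "reification"   # from a higher to a lower level
    if i > j:
        return "abstraction"   # from a lower to a higher level
    return "reflexion"         # same level, e.g. transcoding or retasking

print(mapping_kind("abstract_ui", "concrete_ui"))  # reification
```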

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of users, with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back end (e.g. web services, code libraries, databases, etc.). A declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
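The two-way binding between interactors and data elements described above can be sketched in Python (class and method names are invented for illustration; MARIA itself expresses the binding declaratively):

```python
# Illustrative two-way binding between an interactor and a data element,
# mimicking the runtime behaviour described above. All names are invented.
class DataElement:
    def __init__(self, value=None):
        self.value = value
        self._listeners = []

    def bind(self, listener):
        self._listeners.append(listener)

    def set(self, value):
        # data -> interactor: notify every bound interactor
        self.value = value
        for listener in self._listeners:
            listener(value)

class TextEdit:
    def __init__(self, data_element):
        self.data = data_element
        self.state = data_element.value
        data_element.bind(self._on_data_change)

    def _on_data_change(self, value):
        self.state = value

    def user_types(self, text):
        # interactor -> data: user interaction updates the bound element
        self.state = text
        self.data.value = text

name = DataElement("initial")
field = TextEdit(name)
field.user_types("typed by user")
print(name.value)   # the data element follows the interactor
name.set("set by application")
print(field.state)  # the interactor follows the data element
```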

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent, but implementation language-independent, details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers
• Mobile CUIs model graphical interfaces for mobile devices
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices
• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented on the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model.

The existing elements with new attributes are:

• Presentation: contains the presentation_setting attribute, which holds information on the title, background (colour or image) and the font used.

• Grouping: contains the grouping_setting attribute, which holds information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related with an ordering or hierarchy relation.

The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
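The refinements listed above amount to a one-to-many mapping from abstract interactors to concrete desktop widgets, which can be tabulated directly (Python is used here only as a convenient notation; the entries are transcribed from the list above):

```python
# Desktop CUI refinements of the abstract interactors, transcribed from
# the list above (one abstract interactor, many possible concrete widgets).
DESKTOP_REFINEMENTS = {
    "Activator": ["button", "text_link", "image_link", "image_map", "mailto"],
    "Alarm": ["text", "audio_file"],
    "Description": ["text", "image", "audio", "video", "table"],
    "MultipleChoice": ["check_box", "list_box"],
    "Navigator": ["image_link", "text_link", "button", "image_map"],
    "NumericalEditFull": ["text_field", "spin_box"],
    "NumericalEditInRange": ["text_field", "spin_box", "track_bar"],
    "PositionEdit": ["image_map"],
    "SingleChoice": ["radio_button", "list_box", "drop_down_list", "image_map"],
    "TextEdit": ["text_field", "text_area"],
}

print(DESKTOP_REFINEMENTS["SingleChoice"])
```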

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinement for obtaining the Vocal CUI definition involves defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.

  - a pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);

• changing the synthesis properties (such as volume and gender);

• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time); nomatch (the input provided does not match any possible acceptable input); and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs; and re-prompt, to indicate whether or not to synthesize the last communication again.
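A small sketch of the vocal event model just described; the message strings and the dispatch logic are invented for illustration, only the event names and the message/re-prompt attributes come from the text:

```python
# Vocal UI events from the text: noinput, nomatch and help, each carrying
# a message and a re-prompt flag. Message strings are invented.
VOCAL_EVENTS = {
    "noinput": {"message": "Sorry, I did not hear anything.", "reprompt": True},
    "nomatch": {"message": "Sorry, I did not understand.", "reprompt": True},
    "help":    {"message": "You can say one of the menu options.", "reprompt": False},
}

def handle_event(name, last_prompt):
    """Return the utterances the platform should synthesize for an event."""
    event = VOCAL_EVENTS[name]
    utterances = [event["message"]]
    if event["reprompt"]:
        utterances.append(last_prompt)  # synthesize the last communication again
    return utterances

print(handle_event("noinput", "Which city are you leaving from?"))
```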

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used in model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.
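The operator list above can be put to work in a few lines; the sketch below mirrors the kind of comparison made in the specification's concluding table (the language support set in the example is hypothetical):

```python
# The eight temporal operators from the draft specification, as listed above.
CTT_OPERATORS = {
    "choice", "order independence", "interleaving", "parallelism",
    "synchronization", "disabling", "suspend resume", "enabling",
}

def unsupported(language_operators):
    """Operators of the CTT draft that a given language lacks."""
    return sorted(CTT_OPERATORS - set(language_operators))

# A hypothetical language supporting only the most common operators:
print(unsupported({"enabling", "choice", "interleaving"}))
```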


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.

The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, the Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, pp. 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


There is also an XML schema to support interchange of models inthe XML format

343 Useware Markup Language (UseML)

bull httpwwww3orgwikiUseware_Markup_Language_(UseML)

The Useware Markup Language (UseML) and dialog modellinglanguage (UseDM) have been developed to support the user andtask oriented Useware Engineering process and has been appliedto the domain of production automation and industrialenvironments The Useware process has the following steps

1 Analysis2 Structuring3 Design4 Realization

33

The following diagram illustrates the various kinds of modelsinvolved

The use model abstracts platform-independent tasks actionsactivities and operations into use objects that make up ahierarchically ordered structure Each element of this structure canbe annotated by attributes such as eligible user groups accessrights importance Use objects can be further structured into otheruse objects or elementary use objects Elementary use objectsrepresent the most basic atomic activities of a user such asentering a value or selecting an option

Currently five types of elementary use objects exist

bull Trigger starting calling or executing a certain function ofthe underlying technical device (eg a computer or fielddevice)

bull Select choosing one or more items from a range of givenones

bull Input entering an absolute value overwriting previousvalues

bull Output the user gathers information from the user interfacebull Change making relative changes to an existing value or item

The following diagram describes the UseDM meta-model

34

The presentation model covers the layout and style aspects for theelements given in the dialogue model The presentation model isspecified using the User Interface Markup Language (UIML) whichis covered in the following subsection of this report

344 User Interface Markup Language (UIML)

This was an indirect submission as UIML is the presentationlanguage for UseML UIML was developed by Marc Abrams et alin the late 1990s to address the challenges of developing for agrowing variety of target devices for user interfaces

UIML is an XML language for implementing user interfaces see[UIML] It combines appliance independent presentation conceptswith appliance dependant concepts Please refer to the followinglink for a discussion of the relationship of UIML to other interfacedescription languages

bull httpwwwoasis-openorgcommitteesdownloadphp3419The20Relationship20of20the20UIML20320v0103doc

Here is a pertinent extract

UIML was not intended as a UI design language but ratheras a language for UI implementation Therefore UI designtools could represent a design in a design language andthen transform a UI in a design language to a canonicalrepresentation for UI implementation namely UIML

UIML has been standardized by OASIS see

bull httpswwwoasis-openorgcommitteestc_homephpwg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:

35

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components from the description are present and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
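To make the five sections concrete, the template above might be filled in as follows. This is an illustrative sketch only: the part, property and event names are invented for this example and the fragment has not been validated against the OASIS UIML schema.

```xml
<!-- Illustrative sketch: the five-section UIML template filled in with
     invented names; a style property binds the abstract part to a
     concrete implementation class, as described in the text above. -->
<uiml version="2.0">
  <interface name="hello" class="MyApps">
    <structure>
      <part name="exitItem" class="MenuItem"/>   <!-- component of the UI -->
    </structure>
    <style>
      <!-- bind the abstract part to a concrete implementation class -->
      <property part-name="exitItem" name="rendering">java.awt.MenuItem</property>
      <property part-name="exitItem" name="label">Exit</property>
    </style>
    <events>
      <!-- bind an appliance-independent event to an action -->
      <event part-name="exitItem" name="selected" action="exitApp"/>
    </events>
  </interface>
</uiml>
```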

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are


1. Abstract Interactor Model: describing behaviour common to all modes and media

2. Concrete Interactor Model: describing the user interface for a certain mode or medium

3. Interaction Resource Model: a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping
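Interactor behaviour of this kind can be written directly in W3C SCXML. The sketch below models a hypothetical discrete input interactor with three states; the state and event names are invented for illustration and are not taken from the AIM submission.

```xml
<!-- A hypothetical interactor state machine in W3C SCXML:
     event-based transitions between enabled, focused and disabled states. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="enabled">
  <state id="enabled">
    <transition event="focus"   target="focused"/>
    <transition event="disable" target="disabled"/>
  </state>
  <state id="focused">
    <transition event="blur"    target="enabled"/>
    <transition event="disable" target="disabled"/>
  </state>
  <state id="disabled">
    <transition event="enable"  target="enabled"/>
  </state>
</scxml>
```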

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, Redis TupleSpace and MMI-Arch. For more details, see the link above.

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations: used to observe state charts (state machines) for state changes

• Actions: used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators: specify multimodal relations and link a set of observations to a set of actions

There are six operators, among them sequence, redundance, complementary, assignment and equivalence.
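A multimodal mapping along these lines might be rendered as follows. This is a purely hypothetical sketch of the observation/operator/action structure described above; the element names are not taken from the MIM submission.

```xml
<!-- Hypothetical sketch of a multimodal mapping: an operator links
     observations on interactor state charts to actions. -->
<mapping name="speakSelection">
  <operator type="complementary">
    <observation statechart="listInteractor" state="focused"/>
    <observation statechart="speechInput"    state="recognized"/>
    <action target="listInteractor" event="select"/>  <!-- trigger a state change -->
    <action function="backend.storeChoice"/>          <!-- call a backend function -->
  </operator>
</mapping>
```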

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium; see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, as MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from a higher level to a lower level
• Abstraction: from a lower level to a higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

A proposed UsiXML extension enables the detailed description of users, with a focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface") which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: Allows the user to select one or more values among the elements of a predefined list. It contains the selected value and information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: Allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: Allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: Represents information that is presented to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed at presenting contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: The interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end (e.g. web services, code libraries, databases, etc.). Each declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features makes it possible to have, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
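The abstract concepts above can be pictured with a small MARIA-style fragment. This is an informal sketch: the exact element names and attribute syntax of MARIA XML may differ from what is shown here.

```xml
<!-- Informal MARIA-style sketch: a presentation with a single-choice
     interactor bound to a data model element, plus an external function
     invoked by an activator, as described in the text above. -->
<interface name="pizzaOrder">
  <data_model xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="size" type="xs:string"/>  <!-- XML Schema constructs -->
  </data_model>
  <external_functions>
    <function name="submitOrder" input="size" output="confirmation"/>
  </external_functions>
  <presentation name="chooseSize">
    <grouping>
      <single_choice data_reference="size">  <!-- bound: UI state tracks data -->
        <choice value="small"/>
        <choice value="large"/>
      </single_choice>
      <activator event="activation">submitOrder</activator>
    </grouping>
  </presentation>
</interface>
```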

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers

• Mobile CUIs model graphical interfaces for mobile devices

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented on the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, with information on the title, background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute, with information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related by an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
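For illustration, an abstract Activator refined for the graphical desktop platform might be sketched as follows. This is a hypothetical rendering of the attributes and refinements listed above, not actual MARIA Desktop CUI syntax.

```xml
<!-- Hypothetical sketch: an abstract Activator refined for the graphical
     desktop platform as a button, with the presentation_setting and
     grouping_setting attributes described in the text above. -->
<presentation name="chooseSize"
              presentation_setting="title: Order; background: #ffffff">
  <grouping grouping_setting="fieldset">
    <activator>
      <button label="Submit order"/>  <!-- one possible concrete rendering -->
    </activator>
  </grouping>
</presentation>
```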

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation to be a set of communications between the vocal device and the user that can be considered a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)

• Changing the synthesis properties (such as volume and gender)

• Inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (when the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
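A hypothetical vocal interactor with the three event types might look like this. The markup is an illustrative sketch rather than actual MARIA Vocal CUI syntax; the element names and the grammar path are invented, while the event types and attributes follow the description above.

```xml
<!-- Hypothetical sketch of vocal events: each event carries a message
     and a re-prompt flag, as described in the text above. -->
<vocal_textual_input request="Please say the destination city."
                     grammar="grammars/cities.grxml">
  <event type="noinput" message="Sorry, I did not hear you."   re-prompt="true"/>
  <event type="nomatch" message="Sorry, I did not understand." re-prompt="true"/>
  <event type="help"    message="You can say any major city."  re-prompt="false"/>
</vocal_textual_input>
```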

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used in model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems, DSV-IS 2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N. J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


Page 34: Standarization Actions Report - Europa · 2017-04-20 · Standarization Actions Report Project no. FP7 - 258030 Deliverable D6.2.1 Executive Summary This document provides a description

The following diagram illustrates the various kinds of modelsinvolved

The use model abstracts platform-independent tasks actionsactivities and operations into use objects that make up ahierarchically ordered structure Each element of this structure canbe annotated by attributes such as eligible user groups accessrights importance Use objects can be further structured into otheruse objects or elementary use objects Elementary use objectsrepresent the most basic atomic activities of a user such asentering a value or selecting an option

Currently, five types of elementary use objects exist:

• Trigger: starting, calling or executing a certain function of the underlying technical device (e.g. a computer or field device)

• Select: choosing one or more items from a range of given ones

• Input: entering an absolute value, overwriting previous values

• Output: the user gathers information from the user interface

• Change: making relative changes to an existing value or item
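As an illustration, the elementary use objects above might be combined in a UseML-style fragment like the following sketch; the element and attribute names are hypothetical, chosen for readability rather than taken from the UseML schema:

```xml
<!-- Hypothetical UseML-style sketch (element/attribute names illustrative):
     a use object for setting a target temperature, built from elementary
     use objects and annotated with a user group and an importance level -->
<useObject name="setTargetTemperature" userGroups="operator" importance="high">
  <select name="chooseZone"/>         <!-- choose one heating zone from a list -->
  <input name="enterTemperature"/>    <!-- enter an absolute value -->
  <trigger name="applySettings"/>     <!-- execute a function of the device -->
  <output name="showConfirmation"/>   <!-- gather information from the UI -->
</useObject>
```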

The following diagram describes the UseDM meta-model:


The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces, see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore, UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML.

UIML has been standardized by OASIS, see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application-dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application-dependent but appliance-independent events, and then bind them to appliance-dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
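To make the template concrete, the following sketch fills in the sections for a single button. The part and property names are our own illustrative choices, though the pattern of binding parts to their implementation in the style section follows the UIML approach described above:

```xml
<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="HelloUI" class="MyApps">
    <structure>
      <!-- one UI component, organized as a (trivial) hierarchy -->
      <part name="greetButton" class="Button"/>
    </structure>
    <style>
      <!-- bind the part to a concrete widget class and give it a label -->
      <property part-name="greetButton" name="rendering">java.awt.Button</property>
      <property part-name="greetButton" name="label">Say hello</property>
    </style>
  </interface>
</uiml>
```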

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:

1. Abstract Interactor Model, describing behaviour common to all modes and media

2. Concrete Interactor Model, describing the user interface for a certain mode or medium

3. Interaction Resource Model, a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors, as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:

AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, RedisTupleSpace and MMI-Arch. For more details see the link above.
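For illustration, the behaviour of a simple push-button interactor could be expressed in SCXML roughly as follows; the state and event names are hypothetical:

```xml
<!-- Sketch: event-based state transitions for a push-button interactor -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <transition event="press" target="pressed"/>
  </state>
  <state id="pressed">
    <!-- on release, return to idle and raise an event for mapped actions -->
    <transition event="release" target="idle">
      <raise event="activated"/>
    </transition>
  </state>
</scxml>
```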

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations, used to observe state charts (state machines) for state changes

• Actions, used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators, which specify multimodal relations and link a set of observations to a set of actions

There are six operators, among them sequence, redundance, complementary, assignment and equivalence.

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end, in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models, using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise
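A context-of-use description along these lines might be sketched as follows; the element and attribute names are illustrative rather than the normative UsiXML syntax:

```xml
<!-- Hypothetical sketch of a UsiXML-style context of use, combining
     user, platform and environment models -->
<context id="livingRoomEvening">
  <userStereotype id="elderlyUser" experience="low" visualAcuity="reduced"/>
  <platform id="tablet" screenWidth="1280" screenHeight="800" touch="true"/>
  <environment id="livingRoom" lighting="dim" noiseLevel="low"/>
</context>
```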

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users, with focus on the elderly and disabled:

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: Allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: Allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: Allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: Represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source
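Putting interactors and compositions together, an abstract presentation might be sketched as follows; this is an illustrative fragment in the spirit of MARIA's AUI, not its normative schema:

```xml
<!-- Hypothetical MARIA-style abstract presentation for a login dialogue -->
<presentation name="login">
  <grouping name="credentials">
    <text_edit name="userName"/>       <!-- Edit interactor (text) -->
    <single_choice name="role"/>       <!-- Selection interactor (one value) -->
    <activator name="submitLogin"/>    <!-- Control interactor (activation) -->
    <description name="helpText"/>     <!-- Only-output interactor -->
  </grouping>
</presentation>
```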

MARIA XML allows describing not only the presentation aspects, but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: The interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime, modifying the state of an interactor will also change the value of the bound data element, and vice versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back end (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of the UI reaction triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent, but implementation-language independent, details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers

• Mobile CUIs model graphical interfaces for mobile devices

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation-language independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, holding information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute, holding information on the grouping display technique (grid, fieldset, bullet, background color or image) and on whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, an image_link, an image_map (an image with the definition of a set of areas, each one associated with a different value) or a mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  - speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  - pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user, and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of the vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)

• Changing the synthesis properties (such as volume and gender)

• Inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (when the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation, and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation, and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation, and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and not a normative part of the specification.
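As an indication of what an XML interchange format for task models looks like, a two-task model might be serialized roughly as follows; the element and attribute names are illustrative only, since the draft may evolve:

```xml
<!-- Hypothetical sketch of a CTT-style task model in XML:
     SelectContact enables StartCall (enabling operator) -->
<taskModel name="MakeCall">
  <task name="SelectContact" category="interaction" operator="enabling"/>
  <task name="StartCall" category="interaction"/>
</taskModel>
```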

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation, and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions, and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period, we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N. J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.

52

  • Standarization Actions Report
  • Deliverable D621
    • Executive Summary
    • Table of Contents
    • Introduction
    • Potential opportunities for standardization
      • Task Models
      • Domain Models
      • Abstract UI Models
      • Concrete UI Models
        • WIMP (desktop GUI)
        • Touch-based GUI (smart phones and tablets)
        • Vocal UI
          • State Chart extensible Markup Language (SCXML)
            • Multimodal UI
            • Industrial UI
              • Context of Use
                • General Considerations
                • Industry Fulfilment of Safety Guidelines
                • Automotive Mitigation of Driver Distraction

The presentation model covers the layout and style aspects for the elements given in the dialogue model. The presentation model is specified using the User Interface Markup Language (UIML), which is covered in the following subsection of this report.

3.4.4 User Interface Markup Language (UIML)

This was an indirect submission, as UIML is the presentation language for UseML. UIML was developed by Marc Abrams et al. in the late 1990s to address the challenges of developing for a growing variety of target devices for user interfaces.

UIML is an XML language for implementing user interfaces; see [UIML]. It combines appliance-independent presentation concepts with appliance-dependent concepts. Please refer to the following link for a discussion of the relationship of UIML to other interface description languages:

• http://www.oasis-open.org/committees/download.php/3419/The%20Relationship%20of%20the%20UIML%203%20v0103.doc

Here is a pertinent extract:

"UIML was not intended as a UI design language, but rather as a language for UI implementation. Therefore UI design tools could represent a design in a design language, and then transform a UI in a design language to a canonical representation for UI implementation, namely UIML."

UIML has been standardized by OASIS; see:

• https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml

UIML describes a user interface with five sections: description, structure, data, style and events. The template looks like:

  <?xml version="1.0" standalone="no"?>
  <uiml version="2.0">
    <interface name="" class="MyApps">
      <description></description>
      <structure></structure>
      <data></data>
      <style></style>
      <events></events>
    </interface>
    <logic></logic>
  </uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application dependent data. The style element binds UI components to their implementation, e.g. java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application dependent but appliance independent events, and then bind them to appliance dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
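As an illustration, a minimally populated interface following the template above might look like this; the part name and the property names are invented for this example rather than taken from the UIML specification:

```xml
<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="Hello" class="MyApps">
    <structure>
      <!-- one abstract UI component, named and classed as per the description -->
      <part name="greeting" class="Label"/>
    </structure>
    <style>
      <!-- hypothetical properties binding the part to its implementation -->
      <property part-name="greeting" name="rendering">java.awt.Label</property>
      <property part-name="greeting" name="text">Hello, world!</property>
    </style>
  </interface>
</uiml>
```

A renderer for a different appliance would keep the structure unchanged and swap only the style bindings.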

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516/

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:

1. Abstract Interactor Model, describing behaviour common to all modes and media

2. Concrete Interactor Model, describing the user interface for a certain mode or medium

3. Interaction Resource Model, a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, Redis TupleSpace and MMI-Arch. For more details, see the link above.
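For example, the event-based behaviour of a simple interactor (here a button that can be enabled and disabled) could be expressed in SCXML along the following lines; the state and event names are invented for this example:

```xml
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="enabled">
  <state id="enabled">
    <transition event="disable" target="disabled"/>
    <transition event="press" target="enabled">
      <!-- notify the application back end of the activation -->
      <send event="activated"/>
    </transition>
  </state>
  <state id="disabled">
    <transition event="enable" target="enabled"/>
  </state>
</scxml>
```

The same abstract state machine can back concrete interactors for different modes and media.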

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203/

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations: used to observe state charts (state machines) for state changes

• Actions: used to trigger state changes by sending events to state charts, or to call functions in the backend

• Operators: specify multimodal relations and link a set of observations to a set of actions

There are six operators, including sequence, redundance, complementary, assignment and equivalence.
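In the spirit of this model, a complementary mapping might be sketched as follows; the element and attribute names are invented for illustration, not taken from the MIM specification:

```xml
<!-- hypothetical mapping: a voice command and a touch selection
     complement each other to trigger a single backend action -->
<mapping operator="complementary">
  <observation source="voiceInteractor" event="command.move"/>
  <observation source="touchInteractor" event="select.target"/>
  <action target="backend" function="moveItem"/>
</mapping>
```

With an equivalence operator instead, either observation alone would suffice to fire the action.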

Synchronization Mappings

These are predefined together with interactors

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium; see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML and defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high- to lower-level
• Abstraction: from low- to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users, with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework, with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.

• Relation: a group where two or more elements are related to each other.

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.

• Repeater: used to repeat the content according to data retrieved from a generic data source.
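Putting these elements together, an abstract presentation might be sketched as follows; the concrete element names are invented for illustration rather than taken from MARIA's normative syntax:

```xml
<!-- illustrative sketch of a MARIA-style abstract presentation -->
<presentation name="search_form">
  <grouping>
    <interactor type="text_edit" id="query"/>
    <interactor type="single_choice" id="category" cardinality="1"/>
    <interactor type="activator" id="search"/>
  </grouping>
  <!-- only-output interactor showing the results to the user -->
  <interactor type="description" id="results"/>
</presentation>
```

Each abstract interactor would then be refined into a platform-specific widget at the concrete level.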

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). A declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
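The data model and interactor binding described above might be sketched as follows; the schema fragment uses standard XML Schema constructs, while the interactor element and its data_reference attribute are invented for illustration:

```xml
<!-- data model fragment, defined with XML Schema Definition constructs -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="searchQuery" type="xsd:string"/>
</xsd:schema>

<!-- hypothetical binding of a text-edit interactor to the data element:
     editing the field updates searchQuery at runtime, and vice-versa -->
<interactor type="text_edit" id="query" data_reference="searchQuery"/>
```

The two-way binding is what enables conditional layout and input-format checks to be expressed against the data model rather than against individual widgets.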

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs: model graphical interfaces for desktop computers

• Mobile CUIs: model graphical interfaces for mobile devices

• Multimodal Desktop CUIs: model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs: model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs: model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, holding information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute, holding the information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinement for obtaining the Vocal CUI definition involves defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  • speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  • pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user, and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).
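Using the attributes just described, a record-based refinement might be sketched as follows; the element structure and the request child are invented for illustration, while beep, maxtime and finalsilence are the attributes named above:

```xml
<!-- illustrative sketch of a vocal record element -->
<record id="voice_note" beep="true" maxtime="10s" finalsilence="2s">
  <!-- hypothetical prompt synthesized before recording starts -->
  <request>Please record your message after the tone.</request>
</record>
```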

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);

• changing the synthesis properties (such as volume and gender);

• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time); nomatch (the input provided does not match any possible acceptable input); and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs; and re-prompt, to indicate whether or not to synthesize the last communication again.

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation, and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend-resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
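The suspend and resume behaviour just described can be illustrated with an SCXML state machine, where a deep history state lets the suspended interface resume exactly where the user left off; the state and event names are invented for this automotive example:

```xml
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="navigation">
  <state id="navigation" initial="browsing">
    <!-- deep history: resuming returns to whatever substate was active -->
    <history type="deep" id="resume">
      <transition target="browsing"/>
    </history>
    <state id="browsing">
      <transition event="select.route" target="guidance"/>
    </state>
    <state id="guidance"/>
    <!-- a safety-critical alert suspends the whole navigation UI -->
    <transition event="hazard.alert" target="hazardWarning"/>
  </state>
  <state id="hazardWarning">
    <!-- once the hazard has passed, resume where the user left off -->
    <transition event="hazard.cleared" target="resume"/>
  </state>
</scxml>
```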


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16(4), 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N. J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


  • Standarization Actions Report
  • Deliverable D6.2.1
    • Executive Summary
    • Table of Contents
    • Introduction
    • Potential opportunities for standardization
      • Task Models
      • Domain Models
      • Abstract UI Models
      • Concrete UI Models
        • WIMP (desktop GUI)
        • Touch-based GUI (smart phones and tablets)
        • Vocal UI
          • State Chart extensible Markup Language (SCXML)
        • Multimodal UI
        • Industrial UI
      • Context of Use
        • General Considerations
        • Industry Fulfilment of Safety Guidelines
        • Automotive Mitigation of Driver Distraction
      • Multidimensional Adaptation of Service Front Ends
        • CARF Reference Framework
        • CADS Design Space
        • CARFO Multidimensional Adaptation Ontology
      • Design-time adaptation rules
      • Run-time adaptation rules
      • Advanced Adaptation Logic Description Language (AAL-DL)
      • Corporate Rules for Consistent User Experience
    • W3C Model-Based UI Working Group
      • MBUI WG - Introduction
      • MBUI WG History
        • MBUI Incubator Group
        • MBUI Workshop
        • Formation of MBUI Working Group
      • MBUI Working Group Charter
        • Work Items
      • MBUI Submissions
        • Advanced Service Front-End Description Language (ASFE-DL)
        • The ConcurTaskTrees Notation (CTT)
        • Useware Markup Language (UseML)
        • User Interface Markup Language (UIML)
        • Abstract Interactor Model (AIM) Specification
        • Multimodal Interactor Mapping (MIM) Model Specification
          • Multimodal Mappings
          • Synchronization Mappings
          • Exemplary Mappings
        • UsiXML
          • Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled
        • MARIA
          • Abstract User Interface
          • Concrete User Interface
            • Concrete Desktop User Interface
            • Concrete Vocal User Interface
      • MBUI WG Note - Introduction to Model-Based UI Design
      • MBUI WG Note - Glossary of Terms
      • MBUI WG Specification - Task Models for Model-Based UI Design
      • MBUI WG Specification - Abstract User Interface Models
      • MBUI WG Future Plans
    • CoDeMoDIS proposal for a COST Action
    • ISO 24744 standardisation action
    • Conclusions
    • References

<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="" class="MyApps">
    <description></description>
    <structure></structure>
    <data></data>
    <style></style>
    <events></events>
  </interface>
  <logic></logic>
</uiml>

The description element assigns a name and a class to each UI component. The structure element defines which components are present from the description, and how they are organized as a hierarchy. The data element binds to application dependent data. The style element binds UI components to their implementation, e.g. Java classes such as java.awt.MenuItem. The events element binds events to actions. You can use application dependent but appliance independent events, and then bind them to appliance dependent events through the style element. OASIS is currently working on version 4 of the UIML specification.
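To make this concrete, the following sketch fills in the skeleton for a single hypothetical button. It is illustrative only: the part names, property names and event binding shown here are invented for this example and are not taken from the UIML specification.

```xml
<?xml version="1.0" standalone="no"?>
<uiml version="2.0">
  <interface name="HelloApp" class="MyApps">
    <description>
      <!-- assigns a name and a class to each UI component -->
      <part name="helloButton" class="Button"/>
    </description>
    <structure>
      <!-- the hierarchy of the components declared above -->
      <part name="helloButton"/>
    </structure>
    <data/>
    <style>
      <!-- binds the abstract Button to a toolkit widget -->
      <property part-name="helloButton" name="rendering">java.awt.Button</property>
      <property part-name="helloButton" name="label">Say hello</property>
    </style>
    <events>
      <!-- appliance-independent event bound to an application action -->
      <event part-name="helloButton" name="selected" action="sayHello"/>
    </events>
  </interface>
  <logic/>
</uiml>
```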

A longer introduction to UIML can be found at:

• http://www8.org/w8-papers/5b-hypertext-media/uiml/uiml.html

[UIML] M. Abrams, C. Phanouriou, A. L. Batongbacal, S. M. Williams and J. E. Shuster, "UIML: An Appliance-Independent XML User Interface Language", Journal Computer Networks: The International Journal of Computer and Telecommunications Networking, Vol. 31, No. 11-16, pp. 1695-1708, 1999.

3.4.5 Abstract Interactor Model (AIM) Specification

• http://www.multi-access.de/mint/aim/2012/20120516

AIM focuses on modelling multimodal interactions in terms of modes and media.

The three basic interactors are:


1. Abstract Interactor Model - describing behaviour common to all modes and media

2. Concrete Interactor Model - describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:


AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, Redis/TupleSpace and MMI-Arch. For more details, see the link above.
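As an illustration of how interactor behaviour can be expressed in SCXML, the following fragment sketches a button-like interactor as an event-based state machine. The state and event names (idle, focused, select, etc.) are invented for this sketch and are not taken from the AIM submission.

```xml
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <!-- hypothetical behaviour of a button-like interactor -->
  <state id="idle">
    <transition event="focus" target="focused"/>
  </state>
  <state id="focused">
    <transition event="blur" target="idle"/>
    <transition event="select" target="activated"/>
  </state>
  <final id="activated"/>
</scxml>
```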

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on the Abstract Interactor Model (AIM) Specification.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes, by sending events to state charts or by calling functions in the back end

• Operators - specify multimodal relations and link a set of observations to a set of actions

Six operators are defined, among them sequence, redundance, complementary, assignment and equivalence.
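The observation/operator/action structure described above can be sketched in a few lines of code. This is a minimal illustration, not the MIM specification's own API: the class and function names are invented, and the "redundance" operator is interpreted here simply as "fire all linked actions, since each conveys the same information".

```python
# Minimal sketch of a MIM-style multimodal mapping (names illustrative).

class StateChart:
    """A trivial stand-in for an SCXML state machine."""
    def __init__(self, state):
        self.state = state
        self.observers = []

    def set_state(self, new_state):
        self.state = new_state
        for observer in self.observers:
            observer(new_state)

def observation(chart, watched_state, operator):
    """Observe a state chart and invoke the operator on a matching change."""
    chart.observers.append(
        lambda s: operator() if s == watched_state else None)

def redundance(actions):
    """Hypothetical 'redundance' operator: every linked action conveys
    the same information, so all of them are triggered."""
    def fire():
        for action in actions:
            action()
    return fire

log = []
gui = StateChart("idle")
speak = lambda: log.append("synthesize: item selected")
highlight = lambda: log.append("highlight selected item")

# Link one observation to two redundant actions.
observation(gui, "selected", redundance([speak, highlight]))
gui.set_state("selected")
```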

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org/

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML, defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and finally compiled into a user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:


• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features, available already at the abstract level and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. Interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back end (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features makes it possible to have, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
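The two-way binding described in the Data Model feature above can be sketched in a few lines of code. This is a minimal illustration of the runtime semantics only; the class and attribute names are invented, not MARIA's.

```python
# Sketch of two-way binding between an interactor and a data-model element.

class DataElement:
    def __init__(self, value=None):
        self.value = value
        self.bound = []                # interactors bound to this element

    def set(self, value):
        self.value = value
        for interactor in self.bound:  # data model -> UI
            interactor.state = value

class Interactor:
    def __init__(self, element):
        self.element = element
        self.state = element.value
        element.bound.append(self)

    def user_input(self, value):
        self.state = value
        self.element.value = value     # UI -> data model

name = DataElement("")
field = Interactor(name)               # e.g. a Text Edit interactor

field.user_input("Alice")              # modifying the interactor state...
assert name.value == "Alice"           # ...changes the bound data element

name.set("Bob")                        # and vice-versa
assert field.state == "Bob"
```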

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent, but implementation language-independent, details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers

• Mobile CUIs model graphical interfaces for mobile devices

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language independent) to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are:

• Presentation: it contains the presentation_setting attribute, which holds information on the title, background (color or image) and the font used.

• Grouping: it contains the grouping_setting attribute, which holds information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation.

The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation to be a set of communications between the vocal device and the user that can be treated as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  • speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  • pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user, and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)

• changing the synthesis properties (such as volume and gender)

• inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
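The vocal event model just described can be sketched as follows. The handler class and method names are invented for illustration; only the event names (noinput, nomatch, help) and the message/re-prompt attributes come from the text above.

```python
# Sketch of the vocal event model: each event carries a message and a
# re-prompt flag saying whether to synthesize the last prompt again.

VOCAL_EVENTS = ("noinput", "nomatch", "help")

class VocalEventHandler:
    def __init__(self):
        self.handlers = {}

    def on(self, event, message, reprompt):
        if event not in VOCAL_EVENTS:
            raise ValueError("unknown vocal event: " + event)
        self.handlers[event] = (message, reprompt)

    def fire(self, event, last_prompt):
        """Return the utterances the platform would synthesize."""
        message, reprompt = self.handlers[event]
        output = [message]
        if reprompt:
            output.append(last_prompt)  # repeat the last communication
        return output

dialogue = VocalEventHandler()
dialogue.on("noinput", "Sorry, I did not hear you.", reprompt=True)
dialogue.on("nomatch", "Sorry, I did not understand.", reprompt=False)
```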

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environment", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this terminology to be focused on the needs of academic study, as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional, and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
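The suspend/resume behaviour described above can be sketched as a simple task stack: a safety-critical alert suspends the active task, and popping it restores the original interface once the hazard has passed. The class and task names below are invented for illustration and are not part of the task-model specification.

```python
# Sketch of suspend/resume for an automotive UI (names illustrative).

class TaskStack:
    def __init__(self, root_task):
        self.stack = [root_task]

    @property
    def active(self):
        return self.stack[-1]

    def suspend_for(self, urgent_task):
        """Suspend the active task in favour of a safety-critical one."""
        self.stack.append(urgent_task)

    def resume(self):
        """Hazard passed: drop the alert and resume the suspended task."""
        self.stack.pop()
        return self.active

ui = TaskStack("navigation")
ui.suspend_for("hazard_alert")      # alert of an upcoming hazard
assert ui.active == "hazard_alert"
ui.resume()
assert ui.active == "navigation"    # original interface restored
```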


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012; as such, it is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design-time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr. Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques. This will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V., "USIXML: A language supporting multi-path development of user interfaces", Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D., "MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments", ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J., "A Review of XML-Compliant User Interface Description Languages", Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


  • Standarization Actions Report
  • Deliverable D621
    • Executive Summary
    • Table of Contents
    • Introduction
    • Potential opportunities for standardization
      • Task Models
      • Domain Models
      • Abstract UI Models
      • Concrete UI Models
        • WIMP (desktop GUI)
        • Touch-based GUI (smart phones and tablets)
        • Vocal UI
          • State Chart extensible Markup Language (SCXML)
            • Multimodal UI
            • Industrial UI
              • Context of Use
                • General Considerations
                • Industry Fulfilment of Safety Guidelines
                • Automotive Mitigation of Driver Distraction
                  • Multidimensional Adaptation of Service Front Ends
1. Abstract Interactor Model, describing behaviour common to all modes and media

2. Concrete Interactor Model, describing the user interface for a certain mode or medium

3. Interaction Resource Model - a database used to store and manage interactor state

The following figure shows the interactor class and its relations to the three basic interactors as a UML diagram.

The abstract model distinguishes input from output, and continuous from discrete interaction. The AIM class model is as follows:

AIM further makes use of W3C's State Chart XML notation (SCXML) to describe interactor behaviour in terms of event-based state transitions. User interface design involves two concepts (interactors and mappings) and three steps:

1. Widget design
2. Interaction design
3. Mapping

AIM has been implemented using a range of web technologies: WebSockets, HTML5, CSS3, Rails, NodeJS, Redis TupleSpace and MMI-Arch. For more details see the link above.
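As an illustration, the kind of event-based interactor behaviour that AIM delegates to SCXML can be sketched as below. The element names follow the W3C SCXML specification, but the interactor, its states and its events are hypothetical, not taken from the AIM submission:

```xml
<!-- Hypothetical state chart for a toggle-button interactor.
     SCXML syntax is real; the states and events are illustrative only. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
       initial="off">
  <state id="off">
    <!-- a user "press" event moves the interactor to its on state -->
    <transition event="press" target="on"/>
  </state>
  <state id="on">
    <transition event="press" target="off"/>
  </state>
</scxml>
```

A concrete interactor for a given mode or medium would then render these states and raise the corresponding events.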

3.4.6 Multimodal Interactor Mapping (MIM) Model Specification

• http://www.multi-access.de/mint/mim/2012/20120203

This submission supplements the submission on Abstract Interactor Model (AIM) Specifications.

Multimodal Mappings

Each multimodal mapping consists of:

• Observations - used to observe state charts (state machines) for state changes

• Actions - used to trigger state changes by sending events to state charts or to call functions in the backend

• Operators - specify multimodal relations and link a set of observations to a set of actions

There are six operators, among them sequence, redundance, complementary, assignment and equivalence.
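Purely to illustrate the observation/action/operator idea, a mapping might be sketched as follows; none of these element names are from MIM, whose actual syntax is defined in the submission linked above:

```xml
<!-- Purely illustrative sketch of a multimodal mapping; the element
     names are invented and do not reflect the actual MIM syntax. -->
<mapping>
  <!-- observe two interactor state charts for state changes -->
  <observation chart="gestureInteractor" state="pointing"/>
  <observation chart="voiceInteractor"  state="commandRecognized"/>
  <!-- the operator links the set of observations to the set of actions -->
  <operator type="complementary"/>
  <!-- trigger a state change and call a function in the backend -->
  <action chart="mapView" event="zoomTo"/>
  <action function="backend.logInteraction"/>
</mapping>
```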

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation

3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML and is defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), and thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower-level
• Abstraction: from low to higher-level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise
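As a purely illustrative sketch of how the three model kinds fit together (the element and attribute names below are invented, not actual UsiXML syntax, which is defined in the submission documents):

```xml
<!-- Invented element names for illustration only; see the UsiXML
     submission documents for the real syntax. -->
<contextOfUse id="eveningCarRide">
  <!-- user model: personal preferences and abilities -->
  <userModel prefersLargeFonts="true" visionImpairment="mild"/>
  <!-- platform model: device capabilities -->
  <platformModel screenWidth="800" screenHeight="480" hasTouch="true"/>
  <!-- environment model: ambient conditions -->
  <environmentModel ambientLight="low" ambientNoise="high"/>
</contextOfUse>
```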

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (which are at the level of the so-called "Concrete User Interface") which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: Allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: Allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: Allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: Represents information that is submitted to the user, not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements

• Relation: a group where two or more elements are related to each other

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements

• Repeater: used to repeat the content according to data retrieved from a generic data source

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level and common to all languages are:

• Data Model: The interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound with elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlation between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end support (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
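Since the data model described above is expressed with standard XML Schema Definition constructs, a hypothetical fragment might look like the following; the "contact" element and its fields are invented for illustration and are not taken from the MARIA submission:

```xml
<!-- Hypothetical data model fragment using standard XML Schema
     constructs; the element names and types are illustrative only. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="contact">
    <xs:complexType>
      <xs:sequence>
        <!-- a Text Edit interactor could be bound to "name" -->
        <xs:element name="name" type="xs:string"/>
        <!-- a Numerical Edit in Range interactor could be bound to "age" -->
        <xs:element name="age">
          <xs:simpleType>
            <xs:restriction base="xs:integer">
              <xs:minInclusive value="0"/>
              <xs:maxInclusive value="150"/>
            </xs:restriction>
          </xs:simpleType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

Binding an interactor to such an element would mean that editing the widget updates the bound value at runtime, and vice-versa.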

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers

• Mobile CUIs model graphical interfaces for mobile devices

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices

• Vocal CUIs model interfaces with vocal message rendering and speech recognition

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementation of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model.

The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute holding information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute holding information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation.

The classes which have been extended using the inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  ◦ pre-recorded message, which defines a pre-recorded audio resource, with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback)

• Changing the synthesis properties (such as volume and gender)

• Inserting keywords that explicitly define the start and the end of the grouping

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time); nomatch (the input provided does not match any possible acceptable input); and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
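For comparison, these vocal events map naturally onto constructs in W3C's VoiceXML, a common delivery format for vocal interfaces. The form, field name and prompt wording below are illustrative only, not taken from the MARIA submission:

```xml
<!-- Illustrative VoiceXML form showing noinput, nomatch and help
     handlers; the field and prompts are hypothetical. -->
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <form id="destination">
    <field name="city">
      <prompt>Which city would you like to travel to?</prompt>
      <noinput>
        <!-- message rendered when no input arrives in time -->
        Sorry, I did not hear you.
        <reprompt/>   <!-- re-prompt: repeat the last communication -->
      </noinput>
      <nomatch>
        That city is not in my list. <reprompt/>
      </nomatch>
      <help>
        Please say the name of a city, for example Rome.
      </help>
    </field>
  </form>
</vxml>
```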

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environment", ACM Transactions on Computer-Human Interaction, Vol. 16, N. 4, November 2009, pp. 19:1-19:30, ACM Press.

3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT.

The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.
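To illustrate the idea of an XML interchange format for task models, a task could hypothetically be serialized as below; the element and attribute names are invented for illustration and are not those of the specification, whose real schema is given in the Working Draft linked above:

```xml
<!-- Hypothetical serialization of a simple task model fragment;
     names invented, see the specification for the normative schema. -->
<task id="login" category="interaction">
  <subtask idref="enterName"/>
  <subtask idref="enterPassword"/>
  <!-- temporal operator relating the two subtasks -->
  <operator type="enabling"/>
</task>
```

An equivalent JSON object would carry the same identifiers and operator type as properties.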

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that, whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
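Suspend and resume of this kind can be sketched, for example, with an SCXML state chart using a deep history state; the states and events here are hypothetical:

```xml
<!-- Hypothetical sketch: a hazard alert suspends the normal UI,
     which later resumes where it left off via a history state. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
       initial="normalUI">
  <state id="normalUI" initial="browsing">
    <state id="browsing"/>
    <state id="editing"/>
    <!-- remembers the active substate when the UI is suspended -->
    <history id="resumePoint" type="deep">
      <transition target="browsing"/>
    </history>
    <transition event="hazard.alert" target="hazardUI"/>
  </state>
  <state id="hazardUI">
    <!-- once the hazard has passed, resume the remembered substate -->
    <transition event="hazard.cleared" target="resumePoint"/>
  </state>
</scxml>
```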

3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.

The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V., "UsiXML: A language supporting multi-path development of user interfaces", Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D., "MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments", ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J., "A Review of XML-Compliant User Interface Description Languages", Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.

  • Standarization Actions Report
  • Deliverable D621
    • Executive Summary
    • Table of Contents
    • Introduction
    • Potential opportunities for standardization
      • Task Models
      • Domain Models
      • Abstract UI Models
      • Concrete UI Models
        • WIMP (desktop GUI)
        • Touch-based GUI (smart phones and tablets)
        • Vocal UI
          • State Chart extensible Markup Language (SCXML)
            • Multimodal UI
            • Industrial UI
              • Context of Use
                • General Considerations
                • Industry Fulfilment of Safety Guidelines
                • Automotive Mitigation of Driver Distraction
                  • Multidimensional Adaptation of Service Front Ends
                    • CARF Reference Framework
                    • CADS Design Space
                    • CARFO Multidimensional Adaptation Ontology
                      • Design-time adaptation rules
                      • Run-time adaptation rules
                      • Advanced Adaptation Logic Description Language (AAL-DL)
                      • Corporate Rules for Consistent User Experience
                        • W3C Model-Based UI Working Group
                          • MBUI WG - Introduction
                          • MBUI WG History
                            • MBUI Incubator Group
                            • MBUI Workshop
                            • Formation of MBUI Working Group
                              • MBUI Working Group Charter
                                • Work Items
                                  • MBUI Submissions
                                    • Advanced Service Front-End Description Language (ASFE-DL)
                                    • The ConcurTaskTrees Notation (CTT)
                                    • Useware Markup Language (UseML)
                                    • User Interface Markup Language (UIML)
                                    • Abstract Interactor Model (AIM) Specification
                                    • Multimodal Interactor Mapping (MIM) Model Specification
                                      • Multimodal Mappings
                                      • Synchronization Mappings
                                      • Exemplary Mappings
                                        • UsiXML
                                          • Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled
                                            • MARIA
                                              • Abstract User Interface
                                              • Concrete User Interface
                                                • Concrete Desktop User Interface
                                                • Concrete Vocal User Interface
                                                  • MBUI WG Note - Introduction to Model-Based UI Design
                                                  • MBUI WG Note - Glossary of Terms
                                                  • MBUI WG Specification - Task Models for Model-Based UI Design
                                                  • MBUI WG Specification - Abstract User Interface Models
                                                  • MBUI WG Future Plans
                                                    • CoDeMoDIS proposal for a COST Action
                                                    • ISO 24744 standardisation action
                                                    • Conclusions
                                                    • References
Page 38: Standarization Actions Report - Europa · 2017-04-20 · Standarization Actions Report Project no. FP7 - 258030 Deliverable D6.2.1 Executive Summary This document provides a description

AIM further makes use of W3Cs State Chart XML notation(SCXML) to describe interactor behaviour in terms of event basedstate transition User interface design involves two concepts(interactors and mappings) and three steps

1 Widget design2 Interaction design3 Mapping

AIM has been implemented using a range of web technologiesWebSockets HTML5 CSS3 Rails NodeJS RedisTupleSpace andMMI-Arch For more details see the link above

346 Multimodal Interactor Mapping (MIM) ModelSpecification

bull httpwwwmulti-accessdemintmim201220120203

This submission supplements the submission on Abstract InteractorModel (AIM) Specifications

Multimodal Mappings

Each multimodal mapping consists of

bull Observations - used to observe state charts (statemachines) for state changes

bull Actions - used to trigger state changes by sending events tostart charts or to call functions in the backend

bull Operators - specify multimodal relations and link a set ofobservations to a set of actions

There are six operators sequence redundance complementaryassignment and equivalence

Synchronization Mappings

These are predefined together with interactors.

Exemplary Mappings

• Drag and drop
• Gesture-based navigation


3.4.7 UsiXML

• http://www.w3.org/wiki/images/5/5d/UsiXML_submission_to_W3C.pdf

• http://www.w3.org/wiki/images/8/83/UsiXMLSubmission-Kaiserslautern-Feb2012-Part1.pdf

• http://www.w3.org/wiki/images/3/3e/UsiXMLSubmission-Kaiserslautern-Feb2012-Part2.pdf

• http://www.w3.org/wiki/images/9/96/UsiXMLSubmission-Kaiserslautern-Feb2012-Part3.pdf

The User Interface eXtensible Markup Language (UsiXML) is an XML-compliant markup language that describes the user interface for multiple contexts of use, such as Character User Interfaces (CUIs), Graphical User Interfaces (GUIs), Auditory User Interfaces and Multimodal User Interfaces. UsiXML has been defined by the UsiXML Consortium, see:

• http://www.usixml.org

The semantics are defined in a UML 2.0 class diagram, MOF-XMI, and as an ontology using OWL-Full 2.0. The interchange syntax is XML and defined with an XML schema. UsiXML is based upon the CAMELEON Reference Framework.

Task models can be defined with the ConcurTaskTree (CTT) notation and mapped to abstract user interface models (independent of devices and modalities), thence to concrete user interface models (designed for a class of devices and modalities), and compiled into a final user interface for delivery to a specific device platform. Domain models describe the interface to the user interface back end in terms of properties and methods that can be invoked based upon user interaction. Behaviour can be described in terms of event-driven state transition models using W3C's State Chart XML (SCXML).

You can define different kinds of mappings:

• Reification: from high to lower level
• Abstraction: from low to higher level
• Reflexion: at the same level

Reflexion is useful for transcoding, graceful degradation, restructuring and retasking.
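The three mapping kinds can be pictured as directions over the CAMELEON abstraction levels. The level names come from the framework; the classification function below is a hypothetical illustration, not UsiXML syntax.

```python
# Sketch of the three mapping kinds over the CAMELEON abstraction
# levels (task -> abstract UI -> concrete UI -> final UI).
# The representation is illustrative, not UsiXML's own notation.

LEVELS = ["task", "abstract_ui", "concrete_ui", "final_ui"]

def mapping_kind(source, target):
    """Classify a mapping between two CAMELEON levels."""
    s, t = LEVELS.index(source), LEVELS.index(target)
    if t > s:
        return "reification"   # high -> lower level, towards the final UI
    if t < s:
        return "abstraction"   # low -> higher level, e.g. reverse engineering
    return "reflexion"         # same level, e.g. transcoding or retasking

print(mapping_kind("abstract_ui", "concrete_ui"))  # reification
print(mapping_kind("final_ui", "concrete_ui"))     # abstraction
print(mapping_kind("concrete_ui", "concrete_ui"))  # reflexion
```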

UsiXML defines context of use models:

• User models, e.g. personal preferences and abilities
• Platform models, e.g. device capabilities
• Environment models, e.g. ambient light and noise

The UsiXML metamodel is as follows:

UsiXML is accompanied by a plugin for the Eclipse Integrated Development Environment.

Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled

• http://www.w3.org/wiki/images/f/f8/An_extension_of_UsiXML_for_users.pdf

This introduces a unified user modelling technique designed to support user interfaces for the elderly and disabled.

Two new models are proposed for UsiXML's uiModel:

• disability model
• capability model

This covers the relationship between affected tasks and various kinds of disabilities, including both physical and cognitive disabilities.

3.4.8 MARIA

MARIA (Model-based language for Interactive Applications) [Paterno2000] is a universal, declarative, multiple abstraction-level, XML-based language for modelling interactive applications in ubiquitous environments.

MARIA supports the CAMELEON framework with one language for the abstract description (the so-called "Abstract User Interface" level, in which the UI is defined in a platform-independent manner) and multiple platform-dependent languages (at the level of the so-called "Concrete User Interface"), which refine the abstract one depending on the interaction resources at hand. Examples of platforms are the graphical desktop, the graphical mobile, the vocal platform, etc.

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only through the semantics of the interaction, without referring to a particular device capability, interaction modality or implementation technology.

At the abstract level, a user interface is composed of a number of presentations, has an associated data model, and can access a number of external functions. Each presentation is composed of a number of interactors (basic interaction elements) and a set of interactor compositions.

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.

• Relation: a group where two or more elements are related to each other.

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.

• Repeater: used to repeat the content according to data retrieved from a generic data source.

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice-versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using standard XML Schema Definition constructs.

• Generic Back End: the interface definition contains a set of External Function declarations, which represent functionalities exploited by the UI but implemented by a generic application back end (e.g. web services, code libraries, databases, etc.). A declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify conditional navigation between presentations. This set of features allows having, already at the abstract level, a model of the user interface that is not tied to layout details, but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
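The two-way binding between interactors and data elements described above (modifying either side updates the other) can be sketched in a few lines. The class and method names here are hypothetical illustrations, not MARIA's actual runtime API.

```python
# Sketch of MARIA-style two-way binding between an interactor and a
# data model element: updating either side propagates to the other.
# Names and API are illustrative assumptions.

class DataElement:
    def __init__(self, name, value=None):
        self.name, self._value, self._bound = name, value, []

    def set(self, value, source=None):
        self._value = value
        for interactor in self._bound:      # push to bound interactors
            if interactor is not source:    # avoid echoing back to the origin
                interactor._value = value

    def get(self):
        return self._value

class TextEdit:
    """An Edit interactor bound to a data element."""
    def __init__(self, data_element):
        self._value = data_element.get()
        self._data = data_element
        data_element._bound.append(self)

    def user_input(self, text):             # interactor -> data model
        self._value = text
        self._data.set(text, source=self)

    @property
    def value(self):
        return self._value

name = DataElement("userName", "")
field = TextEdit(name)
field.user_input("Ada")        # interactor updates the data model
print(name.get())              # Ada
name.set("Grace")              # data model updates the interactor
print(field.value)             # Grace
```

The same mechanism is what makes conditional layout and input-value formats expressible at the abstract level: conditions are stated over data elements, not widgets.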

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.

• Mobile CUIs model graphical interfaces for mobile devices.

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.

• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending the existing entities through an inheritance mechanism for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are:

• Presentation: contains the presentation_setting attribute, which holds information on the title, background (colour or image) and the font used.

• Grouping: contains the grouping_setting attribute, which holds information on the grouping display technique (grid, fieldset, bullet, background colour or image) and whether the elements are related with an ordering or hierarchy relation.

The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinement for obtaining the Vocal CUI definition involves defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  • speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.

  • pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.


• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).

• Changing the synthesis properties (such as volume and gender).

• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
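The vocal event model above (noinput/nomatch/help, each with a message and a re-prompt flag) can be sketched as a small dispatch table. The prompt texts and handler table are invented for illustration; only the event and attribute names come from the text.

```python
# Sketch of the vocal event model: each event type carries a message
# to render and a re-prompt flag saying whether to synthesize the last
# communication again. Illustrative only, not a normative representation.

LAST_PROMPT = "Say a destination city."

EVENT_HANDLERS = {
    "noinput": {"message": "I did not hear anything.", "reprompt": True},
    "nomatch": {"message": "I did not understand that.", "reprompt": True},
    "help":    {"message": "You can say any major city.", "reprompt": False},
}

def handle_vocal_event(event, last_prompt=LAST_PROMPT):
    """Return the utterances the platform should synthesize."""
    handler = EVENT_HANDLERS[event]
    utterances = [handler["message"]]
    if handler["reprompt"]:
        # re-prompt: repeat the last communication after the message
        utterances.append(last_prompt)
    return utterances

print(handle_vocal_event("nomatch"))
# ['I did not understand that.', 'Say a destination city.']
```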

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano: MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments. ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we are including in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTree (CTT) notation and refines the metamodel introduced in earlier versions of CTT. The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.
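The suspend/resume behaviour described above can be sketched as a simple task stack: a safety-critical alert suspends the active task, which resumes once the alert finishes. The API and task names are hypothetical; this is not the CTT metamodel, only an illustration of the operator's intent.

```python
# Sketch of the suspend/resume temporal operator in an automotive
# setting: a hazard alert suspends the current task, which resumes
# once the hazard has passed. Hypothetical API for illustration.

class TaskStack:
    def __init__(self):
        self._stack = []
        self.log = []

    def start(self, task):
        self._stack.append(task)
        self.log.append(f"start:{task}")

    def suspend_for(self, urgent_task):
        # Suspend whatever is active in favour of a safety-critical task.
        if self._stack:
            self.log.append(f"suspend:{self._stack[-1]}")
        self._stack.append(urgent_task)
        self.log.append(f"start:{urgent_task}")

    def finish(self):
        done = self._stack.pop()
        self.log.append(f"finish:{done}")
        if self._stack:                      # resume the suspended task
            self.log.append(f"resume:{self._stack[-1]}")

ui = TaskStack()
ui.start("navigation_entry")
ui.suspend_for("hazard_alert")   # upcoming-hazard warning takes over
ui.finish()                      # alert done; navigation entry resumes
print(ui.log[-1])                # resume:navigation_entry
```

Languages lacking suspend/resume can only model the alert as disabling the original task, losing the guarantee that the interrupted interaction picks up where it left off.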


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design-time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe:

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, the Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains:

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L.D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



347 UsiXML

bull httpwwww3orgwikiimages55dUsiXML_submission_to_W3Cpdf

bull httpwwww3orgwikiimages883UsiXMLSubmission-Kaiserslautern-Feb2012-Part1pdf

bull httpwwww3orgwikiimages33eUsiXMLSubmission-Kaiserslautern-Feb2012-Part2pdf

bull httpwwww3orgwikiimages996UsiXMLSubmission-Kaiserslautern-Feb2012-Part3pdf

The User Interface eXtensible Markup Language (UsiXML) is aXML-compliant markup language that describes the user interfacefor multiple contexts of use such as Character User Interfaces(CUIs) Graphical User Interfaces (GUIs) Auditory User Interfacesand Multimodal User Interfaces UsiXML has been defined by theUsiXML Consortium see

bull httpwwwusixmlorg

The semantics are defined in a UML 20 class diagram MOF-XMIand as an ontology using OWL-Full 20 The interchange syntax isXML and defined with an XML schema UsiXML is based upon theCAMELEON Reference Framework

Where task models can be defined with the ConcurTaskTree (CTT)notation and mapped to abstract user interface models(independent of devices and modalities) and thence to concreteuser interface models (designed for a class of devices and

39

modalities) and compiled into a final user interface for delivery tospecific device platform Domain models describe the interface tothe user interface back end in terms of properties and methods thatcan be invoked based upon user interaction Behaviour can bedescribed in terms of event driven state transition models usingW3Cs State Chart XML (SCXML)

You can define different kinds of mappings

bull Reification from high to lower-levelbull Abstraction from low to higher-levelbull Reflexion at the same level

Reflexion is useful for transcoding graceful degradationrestructuring and retasking

UsiXML defines context of use models

bull User models eg personal preferences and abilitiesbull Platform models eg device capabilitiesbull Environment models eg ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied with a plugin for the Eclipse IntegratedDevelopment Environment

Proposed UsiXML extension enabling the detailed description of theusers with focus on the elderly and disabled

bull httpwwww3orgwikiimagesff8An_extension_of_UsiXML_for_userspdf

This introduces a unified user modelling technique designed tosupport user interfaces for the elderly and disabled

Two new models are proposed for UsiXMLs uiModel

40

bull disability modelbull capability model

This covers the relationship between affected tasks and variouskinds of disabilities including both physical and cognitivedisabilities

348 MARIA

MARIA (Model-based language for Interactive Applications)[Paterno2000] is a universal declarative multiple abstraction-level XML-based language for modelling interactive applications inubiquitous environments

MARIA supports the CAMELEON framework with one language forthe abstract description (the so-called ldquoAbstract User Interfacerdquolevel in which the UI is defined in a platform ndashindependentmanner) and multiple platform-dependent languages (which are atthe level of the so-called ldquoConcrete User Interfacerdquo) which refinethe abstract one depending on the interaction resources at hand Examples of platforms are the graphical desktop the graphicalmobile the vocal platform etc

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only throughthe semantics of the interaction without referring to a particulardevice capability interaction modality or implementationtechnology

At the abstract level a user interface is composed of a number ofpresentations has an associated data model and can access anumber of external functions Each presentation is composed of anumber of interactors (basic interaction elements) and a set ofinteractor compositions

According to its semantics, an interactor belongs to one of the following subtypes:

• Selection: Allows the user to select one or more values among the elements of a predefined list. It contains the selected value and the information about the list cardinality. According to the number of values that can be selected, the interactor can be a Single Choice or a Multiple Choice.

• Edit: Allows the user to manually edit the object represented by the interactor, which can be text (Text Edit), a number (Numerical Edit), a position (Position Edit) or a generic object (Object Edit).


• Control: Allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: Represents information that is submitted to the user and not affected by user actions. It can be a Description that represents different types of media, an Alarm, a Feedback or a generic Object.
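The interactor taxonomy above can be sketched as a small class hierarchy. This is our own illustration, not part of MARIA (which defines these subtypes in an XML metamodel); the class and attribute names simply mirror the prose.

```python
from dataclasses import dataclass

# Illustrative sketch only: MARIA defines the AUI metamodel in XML, not in
# Python. These classes mirror the interactor subtypes described above.

@dataclass
class Interactor:
    id: str

@dataclass
class Selection(Interactor):
    choices: list      # the predefined list of values
    cardinality: int   # 1 => Single Choice, >1 => Multiple Choice

    @property
    def kind(self) -> str:
        return "SingleChoice" if self.cardinality == 1 else "MultipleChoice"

@dataclass
class Edit(Interactor):
    value_type: str    # "text" | "number" | "position" | "object"

@dataclass
class Control(Interactor):
    role: str          # "Navigator" (switch presentation) or "Activator"

@dataclass
class OnlyOutput(Interactor):
    content: str       # e.g. a Description, Alarm, Feedback or generic Object
```

The point of the sketch is that the Single/Multiple Choice distinction is carried by the list cardinality rather than by separate element types.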

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.

• Relation: a group where two or more elements are related to each other.

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.

• Repeater: used to repeat the content according to data retrieved from a generic data source.

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: The interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at runtime modifying the state of an interactor will also change the value of the bound data element, and vice versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.

• Generic Back End: The interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end (e.g. web services, code libraries, databases, etc.). One declaration contains the signature of the external function, which specifies its name and its input/output parameters.

• Event Model: Each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields: It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations. This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.
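The two-way binding between interactors and data elements, and the Property Change Events that it relies on, can be sketched as follows. This is a minimal illustration under our own naming (DataElement, BoundInteractor, on_change are not MARIA constructs); it only demonstrates the propagation behaviour described above.

```python
# Hypothetical sketch of MARIA's interactor/data-model binding.
# Changing either side propagates to the other via a property-change event.

class DataElement:
    def __init__(self, name, value=None):
        self.name = name
        self._value = value
        self._listeners = []   # property-change callbacks

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old, self._value = self._value, new
        for cb in self._listeners:
            cb(old, new)       # fire a Property Change Event

    def on_change(self, cb):
        self._listeners.append(cb)

class BoundInteractor:
    """An interactor bound to a data element: updating either side
    updates the other, as described for MARIA's abstract data model."""
    def __init__(self, element: DataElement):
        self.element = element
        self.state = element.value
        element.on_change(lambda old, new: setattr(self, "state", new))

    def user_input(self, new):
        # the user edits the interactor -> the bound data element changes too
        self.element.value = new

field = DataElement("age", 30)
widget = BoundInteractor(field)
widget.user_input(42)   # interactor -> data model
field.value = 7         # data model -> interactor
```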

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.

• Mobile CUIs model graphical interfaces for mobile devices.

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.

• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent information (but still implementation language independent) to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending, through an inheritance mechanism, the existing entities for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we will introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute holding information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute holding the information on the grouping display technique (grid, fieldset, bullet, background color or image) and whether the elements are related with an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
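The refinements listed above form a simple lookup from abstract interactor to candidate concrete widgets. The sketch below records them as a plain table (our own representation for illustration; MARIA expresses these refinements through metamodel inheritance, not a dictionary).

```python
# Abstract-to-concrete refinements for the Graphical Desktop CUI,
# transcribed from the list above into a lookup table.

DESKTOP_REFINEMENTS = {
    "Activator":            ["button", "text_link", "image_link", "image_map", "mailto"],
    "Alarm":                ["text", "audio_file"],
    "Description":          ["text", "image", "audio", "video", "table"],
    "MultipleChoice":       ["check_box", "list_box"],
    "Navigator":            ["image_link", "text_link", "button", "image_map"],
    "NumericalEditFull":    ["text_field", "spin_box"],
    "NumericalEditInRange": ["text_field", "spin_box", "track_bar"],
    "PositionEdit":         ["image_map"],
    "SingleChoice":         ["radio_button", "list_box", "drop_down_list", "image_map"],
    "TextEdit":             ["text_field", "text_area"],
}

def concrete_widgets(abstract_interactor: str) -> list:
    """Return the concrete desktop widgets that can realise an abstract interactor."""
    return DESKTOP_REFINEMENTS.get(abstract_interactor, [])
```

A table like this is what a design tool would consult when letting the designer choose among the admissible concrete realisations of each abstract interactor.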

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis or if the application should ignore the event and continue.

  ◦ pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform's recognition of the user input.


• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of the vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).

• Changing the synthesis properties (such as volume and gender).

• Inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
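The vocal event model can be sketched as a small dispatch table. The event names and the two attributes come from the description above; the function name, the example messages and prompts are our own illustrative choices, not defined by MARIA.

```python
# Sketch of the vocal event model: noinput, nomatch and help events,
# each carrying a 'message' and a 're-prompt' flag as described above.

VOCAL_EVENTS = {
    # event    : (message to render,                    re-prompt?)
    "noinput":   ("Sorry, I did not hear anything.",    True),
    "nomatch":   ("Sorry, I did not understand that.",  True),
    "help":      ("You can say a city name or 'quit'.", False),
}

def handle_vocal_event(event: str, last_prompt: str) -> list:
    """Return the utterances the platform should synthesize for an event."""
    message, reprompt = VOCAL_EVENTS[event]
    utterances = [message]
    if reprompt:
        # re-prompt: synthesize the last communication again
        utterances.append(last_prompt)
    return utterances
```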

[Paterno2000] F. Paternò, C. Santoro, L. D. Spano: MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments. ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend/resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.
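To make the idea of an alternative JSON interchange format concrete, the sketch below shows a purely hypothetical JSON-like shape for a task model fragment. The field names and task names are ours, not the specification's (the draft's normative interchange format is the XML schema).

```python
# Hypothetical illustration of JSON interchange for a task model fragment.
# The shape and names here are invented for illustration only.
import json

task_model = {
    "task": "AccessRoadsideAssistance",
    "operator": "enabling",            # one of the temporal operators above
    "subtasks": [
        {"task": "DescribeProblem", "postcondition": "problem_recorded"},
        {"task": "ConfirmRequest"},
    ],
}

serialized = json.dumps(task_model)    # interchange: serialize to JSON text
roundtrip = json.loads(serialized)     # and read it back without loss
```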

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.
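The suspend/resume behaviour described for the automotive case can be sketched as a minimal state machine. This is our own illustration of the requirement, not part of the specification; class and method names are invented.

```python
# Minimal sketch of suspend/resume: a safety-critical alert suspends the
# running UI task, which resumes once the hazard has passed.

class Task:
    def __init__(self, name):
        self.name = name
        self.state = "running"   # running | suspended

class SuspendResume:
    def __init__(self, task: Task):
        self.task = task
        self.alert = None

    def hazard_alert(self, message: str):
        # a safety-critical service pre-empts the running UI task
        self.task.state = "suspended"
        self.alert = message

    def hazard_passed(self):
        # the original user interface is resumed
        self.alert = None
        self.task.state = "running"

ui = Task("navigation entry")
op = SuspendResume(ui)
op.hazard_alert("Obstacle ahead")
assert ui.state == "suspended"
op.hazard_passed()
```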


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N. J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


  • Standarization Actions Report
  • Deliverable D621
    • Executive Summary
    • Table of Contents
    • Introduction
    • Potential opportunities for standardization
      • Task Models
      • Domain Models
      • Abstract UI Models
      • Concrete UI Models
        • WIMP (desktop GUI)
        • Touch-based GUI (smart phones and tablets)
        • Vocal UI
          • State Chart extensible Markup Language (SCXML)
            • Multimodal UI
            • Industrial UI
              • Context of Use
                • General Considerations
                • Industry Fulfilment of Safety Guidelines
                • Automotive Mitigation of Driver Distraction
                  • Multidimensional Adaptation of Service Front Ends
                    • CARF Reference Framework
                    • CADS Design Space
                    • CARFO Multidimensional Adaptation Ontology
                      • Design-time adaptation rules
                      • Run-time adaptation rules
                      • Advanced Adaptation Logic Description Language (AAL-DL)
                      • Corporate Rules for Consistent User Experience
                        • W3C Model-Based UI Working Group
                          • MBUI WG - Introduction
                          • MBUI WG History
                            • MBUI Incubator Group
                            • MBUI Workshop
                            • Formation of MBUI Working Group
                              • MBUI Working Group Charter
                                • Work Items
                                  • MBUI Submissions
                                    • Advanced Service Front-End Description Language (ASFE-DL)
                                    • The ConcurTaskTrees Notation (CTT)
                                    • Useware Markup Language (UseML)
                                    • User Interface Markup Language (UIML)
                                    • Abstract Interactor Model (AIM) Specification
                                    • Multimodal Interactor Mapping (MIM) Model Specification
                                      • Multimodal Mappings
                                      • Synchronization Mappings
                                      • Exemplary Mappings
                                        • UsiXML
                                          • Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled
                                            • MARIA
                                              • Abstract User Interface
                                              • Concrete User Interface
                                                • Concrete Desktop User Interface
                                                • Concrete Vocal User Interface
                                                  • MBUI WG Note - Introduction to Model-Based UI Design
                                                  • MBUI WG Note - Glossary of Terms
                                                  • MBUI WG Specification - Task Models for Model-Based UI Design
                                                  • MBUI WG Specification - Abstract User Interface Models
                                                  • MBUI WG Future Plans
                                                    • CoDeMoDIS proposal for a COST Action
                                                    • ISO 24744 standardisation action
                                                    • Conclusions
                                                    • References
Page 40: Standarization Actions Report - Europa · 2017-04-20 · Standarization Actions Report Project no. FP7 - 258030 Deliverable D6.2.1 Executive Summary This document provides a description

modalities) and compiled into a final user interface for delivery tospecific device platform Domain models describe the interface tothe user interface back end in terms of properties and methods thatcan be invoked based upon user interaction Behaviour can bedescribed in terms of event driven state transition models usingW3Cs State Chart XML (SCXML)

You can define different kinds of mappings

bull Reification from high to lower-levelbull Abstraction from low to higher-levelbull Reflexion at the same level

Reflexion is useful for transcoding graceful degradationrestructuring and retasking

UsiXML defines context of use models

bull User models eg personal preferences and abilitiesbull Platform models eg device capabilitiesbull Environment models eg ambient light and noise

The UsiXML metamodel is as follows

UsiXML is accompanied with a plugin for the Eclipse IntegratedDevelopment Environment

Proposed UsiXML extension enabling the detailed description of theusers with focus on the elderly and disabled

bull httpwwww3orgwikiimagesff8An_extension_of_UsiXML_for_userspdf

This introduces a unified user modelling technique designed tosupport user interfaces for the elderly and disabled

Two new models are proposed for UsiXMLs uiModel

40

bull disability modelbull capability model

This covers the relationship between affected tasks and variouskinds of disabilities including both physical and cognitivedisabilities

348 MARIA

MARIA (Model-based language for Interactive Applications)[Paterno2000] is a universal declarative multiple abstraction-level XML-based language for modelling interactive applications inubiquitous environments

MARIA supports the CAMELEON framework with one language forthe abstract description (the so-called ldquoAbstract User Interfacerdquolevel in which the UI is defined in a platform ndashindependentmanner) and multiple platform-dependent languages (which are atthe level of the so-called ldquoConcrete User Interfacerdquo) which refinethe abstract one depending on the interaction resources at hand Examples of platforms are the graphical desktop the graphicalmobile the vocal platform etc

Abstract User Interface

The Abstract User Interface (AUI) level describes a UI only throughthe semantics of the interaction without referring to a particulardevice capability interaction modality or implementationtechnology

At the abstract level a user interface is composed of a number ofpresentations has an associated data model and can access anumber of external functions Each presentation is composed of anumber of interactors (basic interaction elements) and a set ofinteractor compositions

According to its semantics an interactor belongs to one thefollowing subtypes

bull Selection Allows the user to select one or more valuesamong the elements of a predefined list It contains theselected value and the information about the list cardinalityAccording to the number of values that can be selected theinteractor can be a Single Choice or a Multiple Choice

bull Edit Allows the user to manually edit the object representedby the interactor which can be text (Text Edit) a number(Numerical Edit) a position (Position Edit) or a genericobject (Object Edit)

41

bull Control Allows the user to switch between presentations(Navigator) or to activate UI functionalities (Activator)

bull Only output Represents information that is submitted tothe user not affected by user actions It can be a Descriptionthat represents different types of media an Alarm aFeedback or a generic Object

The different types of interactor-compositions are

bull Grouping a generic group of interactor elementsbull Relation a group where two or more elements are related

to each otherbull Composite Description represents a group aimed to

present contents through a mixture of Description andNavigator elements

bull Repeater which is used to repeat the content according todata retrieved from a generic data source

MARIA XML allows describing not only the presentation aspectsbut also the behaviour Data Model The interface definitioncontains description of the data types that are manipulated by theuser interface The interactors can be bound with elements thedata model which means that at runtime modifying the state of aninteractor will change also the value of the bound data element andvice-versa The main features available already at the abstract leveland common to all languages are

bull Data Model The interface definition contains description ofthe data types that are manipulated by the user interfaceThe interactors can be bound with elements the data modelwhich means that at runtime modifying the state of aninteractor will change also the value of the bound dataelement and vice-versa This mechanism allows themodelling of correlation between UI elements conditionallayout conditional connections between presentations inputvalues format The data model is defined using the standardXML Schema Definition constructs

bull Generic Back End The interface definition contains a set ofExternal Functions declarations which representsfunctionalities exploited by the UI but implemented by ageneric application back-end support (eg web servicescode libraries databases etc) One declaration contains thesignature of the external function that specifies its name andits inputoutput parameters

bull Event Model Each interactor definition has a number ofassociated events that allow the specification of UI reactiontriggered by the user interaction Two different classes ofevents have been identified the Property Change Events

42

that specify the value change of a property in the UI or in thedata model (with an optional precondition) and theActivation Events that can be raised by activators and areintended to specify the execution of some applicationfunctionalities (eg invoking an external function)

bull Continuous update of fields It is possible to specify that agiven field should be periodically updated invoking anexternal function

bull Dynamic Set of User Interface Elements The languagecontains constructs for specifying partial presentationupdates (dynamically changing the content of entiregroupings) and the possibility to specify a conditionalnavigation between presentationsThis set of new featuresallows having already at the abstract level a model of theuser interface that is not tied to layout details but it iscomplete enough for reasoning on how UI supports both theuser interaction and the application back end

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of aUI A platform is a set of software and hardware interactionresources that characterize a given set of devices MARIA XMLcurrently supports the following platforms

bull Desktop CUI s model graphical interfaces for desktopcomputers

bull Mobile CUI s model graphical interfaces for mobile devicesbull Multimodal Desktop CUI s model interfaces that combine the

graphical and vocal modalities for desktop computersbull Multimodal Mobile CUI s model interfaces that combine the

graphical and vocal modalities for mobile devicesbull Vocal CUI s interfaces with vocal message rendering and

speech recognition

Each platform meta-model is a refinement of the AUI whichspecifies how a given abstract interactor can be represented in thecurrent platform The followings paragraphs provide a briefdescription of the Desktop CUI and of the Vocal CUI

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add platform-dependent (but still implementation-language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, in order to specify the possible concrete implementations of the abstract interactors. This paragraph introduces the extension of the AUI meta-model that defines the Graphical Desktop CUI meta-model.

The existing elements with new attributes are:

• Presentation: it contains the presentation_setting attribute, which holds information on the title, the background (color or image) and the font used.
• Grouping: it contains the grouping_setting attribute, which holds information on the grouping display technique (grid, fieldset, bullet, background color or image) and on whether the elements are related by an ordering or hierarchy relation.

The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, an image_link, an image_map (an image with the definition of a set of areas, each one associated with a different value) or a mailto.
• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.
• A Description can be implemented as a text, image, audio, video or table.
• A MultipleChoice can be implemented as a check_box or a list_box.
• A Navigator can be implemented as an image_link, text_link, button or image_map.
• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).
• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.
• A PositionEdit can be implemented as an image_map.
• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.
• A TextEdit can be implemented as a text_field or a text_area.
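Taken together, these inheritance-based refinements amount to a mapping from each abstract interactor to its admissible desktop implementations. The sketch below transcribes that mapping as a hypothetical Python table (illustrative only, not MARIA's actual schema):

```python
# Admissible desktop refinements for each abstract interactor,
# transcribed from the list above (names are illustrative, not MARIA's schema).
DESKTOP_REFINEMENTS = {
    "Activator":            ["button", "text_link", "image_link", "image_map", "mailto"],
    "Alarm":                ["text", "audio_file"],
    "Description":          ["text", "image", "audio", "video", "table"],
    "MultipleChoice":       ["check_box", "list_box"],
    "Navigator":            ["image_link", "text_link", "button", "image_map"],
    "NumericalEditFull":    ["text_field", "spin_box"],
    "NumericalEditInRange": ["text_field", "spin_box", "track_bar"],
    "PositionEdit":         ["image_map"],
    "SingleChoice":         ["radio_button", "list_box", "drop_down_list", "image_map"],
    "TextEdit":             ["text_field", "text_area"],
}


def is_valid_refinement(abstract, concrete):
    """Check that a concrete widget is an admissible refinement of an abstract interactor."""
    return concrete in DESKTOP_REFINEMENTS.get(abstract, [])


print(is_valid_refinement("SingleChoice", "drop_down_list"))  # True
print(is_valid_refinement("TextEdit", "image_map"))           # False
```

A tool transforming an AUI model into a Desktop CUI model could use such a table to validate (or enumerate) the concrete choices available for each abstract interactor.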

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can easily be defined as the set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation to be a set of communications between the vocal device and the user that can be considered a logical unit, e.g. a dialogue supporting the collection of information about a user. The AUI refinements for obtaining the Vocal CUI definition involve defining elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), of the speech recognizer (e.g. sensitivity, accuracy level) and of the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF character).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.
• A Description can be implemented as:
  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis or the application should ignore the event and continue.
  ◦ pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.
• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.
• A SingleChoice can be implemented as a vocal selection that accepts only one choice.
• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.
• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.
• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform's recognition of the user input.
• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.
• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of the vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI offers four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;
• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);
• changing the synthesis properties (such as volume and gender);
• inserting keywords that explicitly mark the start and the end of the grouping.
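A vocal renderer might apply the first and last of these techniques roughly as follows (an illustrative Python sketch; the function and cue names are assumptions, not MARIA's API):

```python
def render_group(items, technique="keywords"):
    """Wrap a group of vocal outputs with cues that mark its start and end."""
    if technique == "sound":
        # an earcon played before and after the group
        return ["<earcon>"] + items + ["<earcon>"]
    if technique == "keywords":
        # explicit spoken delimiters around the group
        return ["start of group"] + items + ["end of group"]
    raise ValueError("unsupported grouping technique")


print(render_group(["What is your name?", "What is your age?"]))
# -> ['start of group', 'What is your name?', 'What is your age?', 'end of group']
```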

Another substantial difference of vocal interfaces lies in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any acceptable input) and help (the user asks for support, in some platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, indicating whether or not to synthesize the last communication again.
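The three vocal event types and their two attributes can be sketched as a small dispatch table (plain Python; the handler table and message strings are hypothetical, not part of MARIA):

```python
# Each vocal event carries a message to render and a re-prompt flag,
# mirroring the noinput / nomatch / help events described above.
HANDLERS = {
    "noinput": {"message": "I did not hear anything.",   "reprompt": True},
    "nomatch": {"message": "I did not understand that.", "reprompt": True},
    "help":    {"message": "You can say a city name.",   "reprompt": False},
}


def handle_event(event, last_prompt):
    """Return the utterances to synthesize when a vocal event occurs."""
    handler = HANDLERS[event]
    utterances = [handler["message"]]
    if handler["reprompt"]:
        # re-prompt: synthesize the last communication again
        utterances.append(last_prompt)
    return utterances


print(handle_event("nomatch", "Which city are you flying to?"))
# -> ['I did not understand that.', 'Which city are you flying to?']
```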

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano: MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments. ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used in model-based user interface design and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this terminology to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who cannot see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered optional and is not a normative part of the specification.
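Since JSON is anticipated as an alternative interchange format, a toy task model with temporal operators might look like the following. This structure is illustrative only; the actual Working Draft defines its own metamodel and schema.

```python
import json

# A toy CTT-style task tree; each node may carry a temporal operator that
# relates its subtasks (here "enabling" = in sequence, "interleaving" = any order).
task_model = {
    "task": "BookFlight",
    "operator": "enabling",
    "subtasks": [
        {"task": "SelectRoute", "operator": "interleaving",
         "subtasks": [{"task": "ChooseOrigin"}, {"task": "ChooseDestination"}]},
        {"task": "ConfirmBooking"},
    ],
}


def count_tasks(node):
    """Count the tasks in a tree, e.g. as a simple model-size metric."""
    return 1 + sum(count_tasks(s) for s in node.get("subtasks", []))


encoded = json.dumps(task_model)   # serialize for interchange
decoded = json.loads(encoded)      # round-trip back to a task tree
print(count_tasks(decoded))        # -> 5
```

The point of an interchange format is exactly this round-trip: two tools agreeing on the structure can exchange task models without sharing an implementation.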

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that, whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use;
• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design time and run-time;
• metamodels and interchange formats for the Concrete User Interface.

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further standardization work, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much at the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L.D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., Vol. 16, No. 4, ACM, 2009, pp. 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS 2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.



The specification provides a normative metamodel as a UML 20class diagram along with an easy to read textual alternative forpeople who cant see the diagram An XML schema is provided asan interchange format although we envisage the use of otherformats eg JavaScript Structured Object Notation (JSON) Thegraphical notation commonly used for CTT is considered to beoptional and not a normative part of the specification

The document concludes with a table showing which operators aresupported by a range of task modelling languages It is interestingto note that whilst all of the languages considered supportenabling very few support disabling (deactivation) and evenfewer support suspend and resume The latter is considered to becritical for automotive user interfaces where the issue of driverdistraction is a major consideration It is essential to be able tosuspend a user interface in favour or safety critical services egalerts of upcoming hazards The original user interface can beresumed once the hazard has been passed

48

38 MBUI WG Specification - Abstract User InterfaceModels

This document is currently in preparation and is expected to bepublished as a W3C Working Draft in October 2012 The documentwill specify a metamodel and interchange format for abstract userinterface models This is taking longer than originally envisaged inthe Working Group Charter due to the need to assimilate ideasfrom all of the various Working Group submissions and to reach abroad consensus on a merged approach

The following diagram presents the metamodel as of the beginningof August 2012 and as such is likely to be subject to revision in theFirst Public Working Draft

39 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currentlychartered until November 2013 During this period we areattempting to standardize metamodels and interchange formats fortask models and abstract user interface models We are alsoworking on supplementary information covering the rationale for

49

adopting model-based user interface design techniques exemplaryuse cases and a glossary of terms

If we are successful further opportunities for standardizationinclude

bull metamodels and interchange formats for the context of usebull rule languages for mappings between layers in the

CAMELEON Reference Framework and for adaptation to thecontext of use at both design time and run-time

bull metamodels and interchange formats for the Concrete UserInterface

Whether the W3C Model-Based User Interfaces Working Group isrechartered when its current charter expires will depend ongreater engagement with industry This makes it essential for theSerenoa Project to focus on exploitation in its final year

4 CoDeMoDIS proposal for a COSTActionCOST (European Cooperation in Science and Technology) is a longrunning intergovernmental framework supporting cooperationamong scientists and researchers across Europe

bull httpwwwcosteu

A proposal for a COST Action has been prepared to supportcollaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID) If approved this willfoster continued collaboration beyond the end of the Serenoaproject and provide an opportunity for supporting involvement infurther work on standardization in addition to work onharmonization between the currently distinct fields of MDE andMBUID The proposal plans to set up working groups on thefollowing topics

bull Taxonomy of Model-Driven Engineering of InteractiveSystems

bull Comparative Analysis of Models Methods and RelatedTechnologies

bull Software Support for Model-Driven Engineering ofInteractive Systems

bull Harmonization and Unification of Standardisation Efforts

In addition a Standardization Coordinator would be assigned inorder to coordinate all efforts towards standardization

50

The participants behind the proposal come from a broad range ofcountries including Austria Belgium Bulgaria SwitzerlandCyprus Czech Republic Germany Denmark Estonia GreeceSpain Finland France Croatia Hungary Iceland Ireland IsraelItaly Luxembourg Macedonia Norway Poland Portugal RomaniaSerbia Sweden Slovenia Slovakia United Kingdom ArgentinaJapan New Zealand and the United States The proposer is DrGerrit Meixner German Research Center for Artificial Intelligence(DFKI) Kaiserslautern Germany

The pre-proposal was accepted and the full proposal submitted forreview at the end of July 2012

5 ISO 24744 standardisation actionISOIEC 24744 Software Engineering mdash Metamodel forDevelopment Methodologies is an international standard focusingon the use of meta models for software development methodologiesfor information-based domains

bull httpwwwisoorgisocatalogue_detailhtmcsnumber=38854

A standardization action has been suggested to harmonize the ISO24744 methodologies with Model-Based User InterfaceDevelopment techniques At this point this is very much in theearly planning stage

6 ConclusionsThis report surveys the standardization prospects for ideasemerging from the Serenoa project and describes the progressmade in the W3C Model-Based User Interfaces Working Group AFirst Public Working Draft has been published for task models andwill soon be followed by another for abstract user interface modelbased upon a synthesis of ideas from a range of submissions to theWorking Group The aim is to progress these to W3CRecommendations by the time the Working Groups Charter drawsto an end in November 2013

A major challenge will be to convince industry of the practical valueof model-based user interface design techniques and this willrequire investment in developing robust tools and run-timeenvironments as well as outreach on the business case foradoption The Serenoa project is playing a key role in supportingthis work but further investment will be needed beyond the end ofthe project if Europe is to realize the opportunities for exploiting

51

model-based user interface design techniques This is all the moreimportant given the current trend towards a wider range of userinterface technologies and device platforms Further work shouldalso take into account the emergence of the Internet of Things as adriver for new kinds of user interfaces along with the importanceof multilingual user interfaces to support interaction in peoplesnative languages

7 Referencesbull Limbourg Q Vanderdonckt J Michotte B Bouillon L and

Loacutepez-Jaquero V USIXML A language supporting multi-pathdevelopment of user interfaces Engineering HumanComputer Interaction and Interactive Systems Springer2005 134-135

bull Paternograve F Santoro C and Spano L D MARIA A universaldeclarative multiple abstraction-level language for service-oriented applications in ubiquitous environments ACMTrans Comput-Hum Interact ACM 2009 16 191-1930

bull Souchon N and Vanderdonckt J A Review of XML-Compliant User Interface Description Languages Proc of10th Int Conf on Design Specification and Verification ofInteractive Systems DSV-IS2003 (Madeira 4-6 June 2003)Jorge J Nunes NJ Falcao e Cunha J (Eds) Lecture Notesin Computer Science Vol 2844 Springer-Verlag Berlin2003 pp 377-391

52

  • Standarization Actions Report
  • Deliverable D621
    • Executive Summary
    • Table of Contents
    • Introduction
    • Potential opportunities for standardization
      • Task Models
      • Domain Models
      • Abstract UI Models
      • Concrete UI Models
        • WIMP (desktop GUI)
        • Touch-based GUI (smart phones and tablets)
        • Vocal UI
          • State Chart extensible Markup Language (SCXML)
            • Multimodal UI
            • Industrial UI
              • Context of Use
                • General Considerations
                • Industry Fulfilment of Safety Guidelines
                • Automotive Mitigation of Driver Distraction
                  • Multidimensional Adaptation of Service Front Ends
                    • CARF Reference Framework
                    • CADS Design Space
                    • CARFO Multidimensional Adaptation Ontology
                      • Design-time adaptation rules
                      • Run-time adaptation rules
                      • Advanced Adaptation Logic Description Language (AAL-DL)
                      • Corporate Rules for Consistent User Experience
                        • W3C Model-Based UI Working Group
                          • MBUI WG - Introduction
                          • MBUI WG History
                            • MBUI Incubator Group
                            • MBUI Workshop
                            • Formation of MBUI Working Group
                              • MBUI Working Group Charter
                                • Work Items
                                  • MBUI Submissions
                                    • Advanced Service Front-End Description Language (ASFE-DL)
                                    • The ConcurTaskTrees Notation (CTT)
                                    • Useware Markup Language (UseML)
                                    • User Interface Markup Language (UIML)
                                    • Abstract Interactor Model (AIM) Specification
                                    • Multimodal Interactor Mapping (MIM) Model Specification
                                      • Multimodal Mappings
                                      • Synchronization Mappings
                                      • Exemplary Mappings
                                        • UsiXML
                                          • Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled
                                            • MARIA
                                              • Abstract User Interface
                                              • Concrete User Interface
                                                • Concrete Desktop User Interface
                                                • Concrete Vocal User Interface
                                                  • MBUI WG Note - Introduction to Model-Based UI Design
                                                  • MBUI WG Note - Glossary of Terms
                                                  • MBUI WG Specification - Task Models for Model-Based UI Design
                                                  • MBUI WG Specification - Abstract User Interface Models
                                                  • MBUI WG Future Plans
                                                    • CoDeMoDIS proposal for a COST Action
                                                    • ISO 24744 standardisation action
                                                    • Conclusions
                                                    • References
Page 42: Standarization Actions Report - Europa · 2017-04-20 · Standarization Actions Report Project no. FP7 - 258030 Deliverable D6.2.1 Executive Summary This document provides a description

• Control: allows the user to switch between presentations (Navigator) or to activate UI functionalities (Activator).

• Only output: represents information that is presented to the user and is not affected by user actions. It can be a Description, which represents different types of media, an Alarm, a Feedback, or a generic Object.

The different types of interactor compositions are:

• Grouping: a generic group of interactor elements.

• Relation: a group where two or more elements are related to each other.

• Composite Description: represents a group aimed to present contents through a mixture of Description and Navigator elements.

• Repeater: used to repeat the content according to data retrieved from a generic data source.

MARIA XML allows describing not only the presentation aspects but also the behaviour. The main features available already at the abstract level, and common to all languages, are:

• Data Model: the interface definition contains a description of the data types that are manipulated by the user interface. The interactors can be bound to elements of the data model, which means that at run-time modifying the state of an interactor will also change the value of the bound data element, and vice versa. This mechanism allows the modelling of correlations between UI elements, conditional layout, conditional connections between presentations, and input value formats. The data model is defined using the standard XML Schema Definition constructs.
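The two-way binding between interactors and data elements described above can be sketched as a small observer mechanism (an illustrative Python sketch of the idea, not MARIA's actual implementation; all class and method names here are hypothetical):

```python
class DataElement:
    """A value in the data model that notifies bound interactors."""
    def __init__(self, value=None):
        self.value = value
        self._observers = []

    def bind(self, observer):
        self._observers.append(observer)

    def set(self, value, source=None):
        self.value = value
        for obs in self._observers:
            if obs is not source:          # avoid notifying the writer itself
                obs.refresh(value)

class Interactor:
    """A UI element bound to a data element (e.g. a TextEdit)."""
    def __init__(self, data_element):
        self.state = data_element.value
        self.data = data_element
        data_element.bind(self)

    def refresh(self, value):              # data model -> interactor
        self.state = value

    def user_input(self, value):           # interactor -> data model
        self.state = value
        self.data.set(value, source=self)

name = DataElement("")
field = Interactor(name)
field.user_input("Alice")   # user edits the field: the data model follows
assert name.value == "Alice"
name.set("Bob")             # the application writes the data: the UI follows
assert field.state == "Bob"
```

Either side can be modified and the other stays in sync, which is what makes conditional layout and cross-presentation correlations expressible at the abstract level.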

• Generic Back End: the interface definition contains a set of External Functions declarations, which represent functionalities exploited by the UI but implemented by a generic application back-end (e.g. web services, code libraries, databases, etc.). Each declaration contains the signature of the external function, specifying its name and its input/output parameters.

• Event Model: each interactor definition has a number of associated events that allow the specification of UI reactions triggered by the user interaction. Two different classes of events have been identified: the Property Change Events, which specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events, which can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).
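The two event classes just described can be sketched as follows (an illustrative Python sketch of the distinction; the names are hypothetical and this is not MARIA's actual schema):

```python
class PropertyChangeEvent:
    """Fires a handler when a property changes, guarded by an
    optional precondition on the new value."""
    def __init__(self, prop, handler, precondition=None):
        self.prop = prop
        self.handler = handler
        self.precondition = precondition or (lambda value: True)

    def fire(self, value):
        if self.precondition(value):
            self.handler(value)

class ActivationEvent:
    """Raised by an activator to invoke an application functionality,
    e.g. an external function of the back end."""
    def __init__(self, external_function):
        self.call = external_function

    def fire(self, *args):
        return self.call(*args)

log = []
on_change = PropertyChangeEvent("text", log.append,
                                precondition=lambda v: v != "")
on_change.fire("")        # precondition fails: handler not run
on_change.fire("hello")   # handler runs
submit = ActivationEvent(lambda data: f"sent {data}")
result = submit.fire("hello")
assert log == ["hello"] and result == "sent hello"
```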

• Continuous Update of Fields: it is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements: the language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations.

This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning on how the UI supports both the user interaction and the application back end.

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation-language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.

• Mobile CUIs model graphical interfaces for mobile devices.

• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.

• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.

• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation-language-independent) information to the structure of the corresponding AUI model for the same application interface, either by adding attributes or by extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model.

The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, holding information on the title, the background (colour or image) and the font used; and Grouping, which contains the grouping_setting attribute, holding the information on the grouping display technique (grid, fieldset, bullet, background colour or image) and on whether the elements are related with an ordering or hierarchy relation.

The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
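The inheritance-based refinement above amounts to a mapping from each abstract interactor to a set of admissible concrete widgets. A minimal sketch of how a tool might use such a table (illustrative Python; the `refine` function and preference mechanism are hypothetical, only the widget sets are transcribed from the list above):

```python
# Admissible concrete desktop widgets per abstract interactor,
# transcribed from the refinement list above (a subset, for brevity).
DESKTOP_REFINEMENTS = {
    "Activator": {"button", "text_link", "image_link", "image_map", "mailto"},
    "Alarm": {"text", "audio_file"},
    "MultipleChoice": {"check_box", "list_box"},
    "SingleChoice": {"radio_button", "list_box", "drop_down_list", "image_map"},
    "NumericalEditInRange": {"text_field", "spin_box", "track_bar"},
    "TextEdit": {"text_field", "text_area"},
}

def refine(abstract_interactor, preferred=()):
    """Pick a concrete widget, honouring a preference order if possible."""
    options = DESKTOP_REFINEMENTS[abstract_interactor]
    for widget in preferred:
        if widget in options:
            return widget
    return sorted(options)[0]   # deterministic default

assert refine("TextEdit") == "text_area"
assert refine("Activator", preferred=("button",)) == "button"
```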

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can easily be defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis or the application should ignore the event and continue.

  ◦ pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of the vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).
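As a rough illustration of the NumericalEditInRange behaviour described above, a recognizer that accepts only numbers within a range might look like the following (illustrative Python only; a real vocal platform would enforce this through a grammar, not application code):

```python
def vocal_numeric_in_range(utterance, low, high):
    """Accept the utterance only if it is a number within [low, high].

    Returns the number on a match, or None (a 'nomatch' condition)
    otherwise. The function name is hypothetical.
    """
    try:
        value = float(utterance)
    except ValueError:
        return None                 # not a number at all
    if low <= value <= high:
        return value
    return None                     # a number, but out of range

assert vocal_numeric_in_range("42", 0, 100) == 42.0
assert vocal_numeric_in_range("150", 0, 100) is None
assert vocal_numeric_in_range("forty-two", 0, 100) is None
```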

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);

• changing the synthesis properties (such as volume and gender);

• inserting keywords that explicitly define the start and the end of the grouping.

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, to indicate whether or not to synthesize the last communication again.
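The three vocal event types and their two shared attributes can be sketched as a small dispatch routine (an illustrative Python sketch of the structure; the messages and names are invented, not MARIA's actual syntax):

```python
# Handlers for the three vocal event types, each carrying the two
# attributes described above: a message and a re-prompt flag.
VOCAL_EVENTS = {
    "noinput": {"message": "Sorry, I did not hear you.", "re_prompt": True},
    "nomatch": {"message": "Sorry, I did not understand.", "re_prompt": True},
    "help":    {"message": "You can say a city name or 'quit'.",
                "re_prompt": False},
}

def handle_event(event, last_prompt, synthesize):
    """Render the event's message, then optionally repeat the last prompt."""
    spec = VOCAL_EVENTS[event]
    synthesize(spec["message"])
    if spec["re_prompt"]:
        synthesize(last_prompt)

spoken = []
handle_event("nomatch", "Which city?", spoken.append)
handle_event("help", "Which city?", spoken.append)
assert spoken == ["Sorry, I did not understand.", "Which city?",
                  "You can say a city name or 'quit'."]
```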

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, "MARIA: A Universal, Declarative, Multiple Abstraction-Level Language for Service-Oriented Applications in Ubiquitous Environments", ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who cannot see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.
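To make the interchange-format point concrete, a JSON serialization of a task-model fragment could take a shape like the following (purely illustrative, with hypothetical field names; the normative element names live in the XML schema and are not reproduced here):

```python
import json

# A hand-rolled shape for a task-model fragment: a parent task whose
# two subtasks are connected by the "enabling" temporal operator.
task = {
    "name": "BookFlight",
    "operator": "enabling",
    "subtasks": [
        {"name": "SelectFlight", "type": "interaction"},
        {"name": "ConfirmBooking", "type": "interaction"},
    ],
}

encoded = json.dumps(task)
decoded = json.loads(encoded)    # round-trips losslessly
assert decoded == task
```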

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.
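The suspend/resume behaviour described above can be sketched as a small task stack (an illustrative Python sketch of the semantics; the names are hypothetical and this is not the W3C metamodel):

```python
class TaskStack:
    """Suspend the running task in favour of a safety-critical one,
    then resume it once the interruption is over."""
    def __init__(self, initial_task):
        self._stack = [initial_task]

    @property
    def active(self):
        return self._stack[-1]

    def suspend_for(self, urgent_task):   # e.g. a hazard alert
        self._stack.append(urgent_task)

    def resume(self):                     # hazard has passed
        self._stack.pop()

ui = TaskStack("navigation")
ui.suspend_for("hazard alert")
assert ui.active == "hazard alert"   # navigation UI is suspended
ui.resume()
assert ui.active == "navigation"     # original UI resumed
```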


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use;

• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design-time and run-time;

• metamodels and interchange formats for the Concrete User Interface.

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.

50

The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, the Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr. Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 Referencesbull Limbourg Q Vanderdonckt J Michotte B Bouillon L and

Loacutepez-Jaquero V USIXML A language supporting multi-pathdevelopment of user interfaces Engineering HumanComputer Interaction and Interactive Systems Springer2005 134-135

bull Paternograve F Santoro C and Spano L D MARIA A universaldeclarative multiple abstraction-level language for service-oriented applications in ubiquitous environments ACMTrans Comput-Hum Interact ACM 2009 16 191-1930

bull Souchon N and Vanderdonckt J A Review of XML-Compliant User Interface Description Languages Proc of10th Int Conf on Design Specification and Verification ofInteractive Systems DSV-IS2003 (Madeira 4-6 June 2003)Jorge J Nunes NJ Falcao e Cunha J (Eds) Lecture Notesin Computer Science Vol 2844 Springer-Verlag Berlin2003 pp 377-391

52

  • Standarization Actions Report
  • Deliverable D621
    • Executive Summary
    • Table of Contents
    • Introduction
    • Potential opportunities for standardization
      • Task Models
      • Domain Models
      • Abstract UI Models
      • Concrete UI Models
        • WIMP (desktop GUI)
        • Touch-based GUI (smart phones and tablets)
        • Vocal UI
          • State Chart extensible Markup Language (SCXML)
            • Multimodal UI
            • Industrial UI
              • Context of Use
                • General Considerations
                • Industry Fulfilment of Safety Guidelines
                • Automotive Mitigation of Driver Distraction
                  • Multidimensional Adaptation of Service Front Ends
                    • CARF Reference Framework
                    • CADS Design Space
                    • CARFO Multidimensional Adaptation Ontology
                      • Design-time adaptation rules
                      • Run-time adaptation rules
                      • Advanced Adaptation Logic Description Language (AAL-DL)
                      • Corporate Rules for Consistent User Experience
                        • W3C Model-Based UI Working Group
                          • MBUI WG - Introduction
                          • MBUI WG History
                            • MBUI Incubator Group
                            • MBUI Workshop
                            • Formation of MBUI Working Group
                              • MBUI Working Group Charter
                                • Work Items
                                  • MBUI Submissions
                                    • Advanced Service Front-End Description Language (ASFE-DL)
                                    • The ConcurTaskTrees Notation (CTT)
                                    • Useware Markup Language (UseML)
                                    • User Interface Markup Language (UIML)
                                    • Abstract Interactor Model (AIM) Specification
                                    • Multimodal Interactor Mapping (MIM) Model Specification
                                      • Multimodal Mappings
                                      • Synchronization Mappings
                                      • Exemplary Mappings
                                        • UsiXML
                                          • Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled
                                            • MARIA
                                              • Abstract User Interface
                                              • Concrete User Interface
                                                • Concrete Desktop User Interface
                                                • Concrete Vocal User Interface
                                                  • MBUI WG Note - Introduction to Model-Based UI Design
                                                  • MBUI WG Note - Glossary of Terms
                                                  • MBUI WG Specification - Task Models for Model-Based UI Design
                                                  • MBUI WG Specification - Abstract User Interface Models
                                                  • MBUI WG Future Plans
                                                    • CoDeMoDIS proposal for a COST Action
                                                    • ISO 24744 standardisation action
                                                    • Conclusions
                                                    • References
Page 43: Standarization Actions Report - Europa · 2017-04-20 · Standarization Actions Report Project no. FP7 - 258030 Deliverable D6.2.1 Executive Summary This document provides a description

that specify the value change of a property in the UI or in the data model (with an optional precondition), and the Activation Events that can be raised by activators and are intended to specify the execution of some application functionalities (e.g. invoking an external function).

• Continuous update of fields. It is possible to specify that a given field should be periodically updated by invoking an external function.

• Dynamic Set of User Interface Elements. The language contains constructs for specifying partial presentation updates (dynamically changing the content of entire groupings) and the possibility to specify a conditional navigation between presentations.

This set of new features allows having, already at the abstract level, a model of the user interface that is not tied to layout details but is complete enough for reasoning about how the UI supports both the user interaction and the application back end.
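As a rough illustration, the guarded value-change behaviour described above can be sketched in Python. MARIA expresses this declaratively in XML; the class and field names below are purely illustrative, not MARIA syntax:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ValueChangeEvent:
    """Changes a property in the UI or data model, guarded by an optional precondition."""
    target: str                          # property to update, e.g. "cart.total"
    new_value: object
    precondition: Optional[Callable[[dict], bool]] = None

    def apply(self, model: dict) -> bool:
        if self.precondition and not self.precondition(model):
            return False                 # precondition not met: no update
        model[self.target] = self.new_value
        return True

model = {"cart.items": 3, "cart.total": 30}
event = ValueChangeEvent("cart.total", 0,
                         precondition=lambda m: m["cart.items"] == 0)
applied = event.apply(model)             # items != 0, so the update is skipped
```

The same guard-then-update shape also covers the continuous-update case: a scheduler would simply call `apply` periodically with the result of the external function.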

Concrete User Interface

A Concrete User Interface (CUI) in MARIA XML provides platform-dependent but implementation language-independent details of a UI. A platform is a set of software and hardware interaction resources that characterize a given set of devices. MARIA XML currently supports the following platforms:

• Desktop CUIs model graphical interfaces for desktop computers.
• Mobile CUIs model graphical interfaces for mobile devices.
• Multimodal Desktop CUIs model interfaces that combine the graphical and vocal modalities for desktop computers.
• Multimodal Mobile CUIs model interfaces that combine the graphical and vocal modalities for mobile devices.
• Vocal CUIs model interfaces with vocal message rendering and speech recognition.

Each platform meta-model is a refinement of the AUI, which specifies how a given abstract interactor can be represented in the current platform. The following paragraphs provide a brief description of the Desktop CUI and of the Vocal CUI.

Concrete Desktop User Interface

A CUI meta-model for a given platform is an extension of the AUI meta-model, which means that all the entities in the AUI still exist in the CUI. The extensions add the platform-dependent (but still implementation language-independent) information to the structure of the corresponding AUI model for the same application interface, by either adding attributes or extending the existing entities through an inheritance mechanism, for the specification of the possible concrete implementations of the abstract interactors. In this paragraph we introduce the extension to the AUI meta-model for the definition of the Graphical Desktop CUI meta-model. The existing elements with new attributes are: Presentation, which contains the presentation_setting attribute, carrying information on the title, background (color or image) and the font used; and Grouping, which contains the grouping_setting attribute, carrying information on the grouping display technique (grid, fieldset, bullet, background color or image) and on whether the elements are related by an ordering or hierarchy relation. The classes which have been extended using inheritance are the following:

• An Activator can be implemented as a button, a text_link, image_link, image_map (an image with the definition of a set of areas, each one associated with a different value) or mailto.

• An Alarm can be implemented as a text (a text with font and style information) or an audio_file.

• A Description can be implemented as a text, image, audio, video or table.

• A MultipleChoice can be implemented as a check_box or a list_box.

• A Navigator can be implemented as an image_link, text_link, button or image_map.

• A NumericalEditFull can be implemented as a text_field or a spin_box (a text field which also includes up and down buttons).

• A NumericalEditInRange can be implemented as a text_field, a spin_box or a track_bar.

• A PositionEdit can be implemented as an image_map.

• A SingleChoice can be implemented as a radio_button, list_box, drop_down_list or image_map.

• A TextEdit can be implemented as a text_field or a text_area.
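The one-to-many relationship between abstract interactors and their concrete desktop refinements can be pictured as a lookup table. The following Python sketch uses the widget names from the list above, but the table and chooser function are an illustration, not part of MARIA's actual metamodel:

```python
from typing import Optional

# Abstract interactor -> candidate desktop widgets (names taken from the text above).
DESKTOP_REFINEMENTS = {
    "Activator":            ["button", "text_link", "image_link", "image_map", "mailto"],
    "Alarm":                ["text", "audio_file"],
    "Description":          ["text", "image", "audio", "video", "table"],
    "MultipleChoice":       ["check_box", "list_box"],
    "Navigator":            ["image_link", "text_link", "button", "image_map"],
    "NumericalEditFull":    ["text_field", "spin_box"],
    "NumericalEditInRange": ["text_field", "spin_box", "track_bar"],
    "PositionEdit":         ["image_map"],
    "SingleChoice":         ["radio_button", "list_box", "drop_down_list", "image_map"],
    "TextEdit":             ["text_field", "text_area"],
}

def refine(abstract_interactor: str, preferred: Optional[str] = None) -> str:
    """Pick a concrete desktop widget for an abstract interactor."""
    options = DESKTOP_REFINEMENTS[abstract_interactor]
    # Honour a designer preference when it is a legal refinement, else fall back.
    return preferred if preferred in options else options[0]

widget = refine("SingleChoice", preferred="drop_down_list")
```

A tool generating the concrete model would consult such a table once per abstract interactor, which is why the AUI can stay free of layout details.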

Concrete Vocal User Interface

While in graphical interfaces the concept of presentation can be easily defined as a set of user interface elements perceivable at a given time (e.g. a page in the Web context), in the case of vocal interfaces we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), of the speech recognizer (e.g. sensitivity, accuracy level) and of the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF character).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide whether the user can stop the synthesis or the application should ignore the event and continue.

  ◦ pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform's recognition of the user input.

• A NumericalEditFull and a NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes relative to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).

With respect to the composition of interactors, the Vocal CUI has four solutions that permit identifying the beginning and the end of a grouping:

• inserting a sound at the beginning and at the end of the group;

• inserting a pause, which must be neither too short (useless) nor too long (slow system feedback);

• changing the synthesis properties (such as volume and gender);

• inserting keywords that explicitly define the start and the end of the grouping.
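The fourth technique (explicit boundary keywords) can be sketched as follows; the function name and the keyword strings are invented for illustration:

```python
# Wrap a group of vocal prompts with explicit boundary keywords, so the
# listener can tell where the grouping starts and ends.
def delimit_group(prompts: list, start="Beginning of options.", end="End of options."):
    return [start, *prompts, end]

sequence = delimit_group(["Say 'economy'.", "Say 'business'."])
```

The other three techniques would be applied analogously, e.g. by prepending and appending an audio cue or by switching synthesis properties for the prompts inside the group.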

Another substantial difference of vocal interfaces is in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, indicating whether or not to synthesize the last communication again.
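The event model described above can be sketched as a small dispatch table. The event names (noinput, nomatch, help) and the message / re-prompt attributes come from the text; the handler class and example prompts are illustrative:

```python
from dataclasses import dataclass

@dataclass
class VocalEventHandler:
    message: str        # what to render when the event occurs
    re_prompt: bool     # whether to synthesize the last communication again

def handle(event: str, handlers: dict, last_prompt: str) -> list:
    h = handlers[event]
    output = [h.message]
    if h.re_prompt:
        output.append(last_prompt)   # repeat the previous communication
    return output

handlers = {
    "noinput": VocalEventHandler("Sorry, I did not hear you.", re_prompt=True),
    "nomatch": VocalEventHandler("Sorry, I did not understand.", re_prompt=True),
    "help":    VocalEventHandler("You can say a city name, or 'quit'.", re_prompt=False),
}

out = handle("noinput", handlers, last_prompt="Which city are you leaving from?")
```

In a real vocal platform the dispatch would be driven by recognizer timeouts and grammar mismatches rather than an explicit event string.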

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano, MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments, ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this terminology to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend/resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.
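As an illustration of the kind of non-normative JSON interchange mentioned above, a small CTT-style task model might be rendered like this. The field names are invented for the sketch; the specification's normative interchange format is the XML schema:

```python
import json

# Hypothetical JSON rendering of a two-step task model: the subtasks are
# linked by the "enabling" temporal operator, and the second subtask carries
# one of the newly introduced postconditions.
task_model = {
    "task": "MakeReservation",
    "operator": "enabling",
    "subtasks": [
        {"task": "SelectDates", "category": "interaction"},
        {"task": "ConfirmBooking", "category": "interaction",
         "postcondition": "reservation.confirmed == true"},
    ],
}

encoded = json.dumps(task_model, indent=2)
decoded = json.loads(encoded)        # round-trips without loss
```

Because the metamodel is a plain containment hierarchy plus operator labels, any tree-shaped serialization (XML or JSON) can carry it.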

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.
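The suspend/resume behaviour from the automotive example can be sketched as a minimal state holder; the class, method and task names are illustrative:

```python
# A safety-critical task suspends the currently active UI task, which is
# resumed once the hazard has passed.
class TaskRunner:
    def __init__(self, task: str):
        self.active, self.suspended = task, None

    def suspend_for(self, critical_task: str):
        # Park the current task and switch to the safety-critical one.
        self.suspended, self.active = self.active, critical_task

    def resume(self):
        # Restore the parked task.
        self.active, self.suspended = self.suspended, None

ui = TaskRunner("navigation-entry")
ui.suspend_for("hazard-alert")   # driver sees the safety-critical service
ui.resume()                      # original interface comes back
```

Languages without a suspend/resume operator can only express this by duplicating the interrupted task's state by hand, which is why the operator matters for this domain.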


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use;

• rule languages for mappings between layers in the CAMELEON Reference Framework and for adaptation to the context of use at both design time and run-time;

• metamodels and interchange formats for the Concrete User Interface.

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, the Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr. Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering - Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.


The specification provides a normative metamodel as a UML 20class diagram along with an easy to read textual alternative forpeople who cant see the diagram An XML schema is provided asan interchange format although we envisage the use of otherformats eg JavaScript Structured Object Notation (JSON) Thegraphical notation commonly used for CTT is considered to beoptional and not a normative part of the specification

The document concludes with a table showing which operators aresupported by a range of task modelling languages It is interestingto note that whilst all of the languages considered supportenabling very few support disabling (deactivation) and evenfewer support suspend and resume The latter is considered to becritical for automotive user interfaces where the issue of driverdistraction is a major consideration It is essential to be able tosuspend a user interface in favour or safety critical services egalerts of upcoming hazards The original user interface can beresumed once the hazard has been passed

48

38 MBUI WG Specification - Abstract User InterfaceModels

This document is currently in preparation and is expected to bepublished as a W3C Working Draft in October 2012 The documentwill specify a metamodel and interchange format for abstract userinterface models This is taking longer than originally envisaged inthe Working Group Charter due to the need to assimilate ideasfrom all of the various Working Group submissions and to reach abroad consensus on a merged approach

The following diagram presents the metamodel as of the beginningof August 2012 and as such is likely to be subject to revision in theFirst Public Working Draft

39 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currentlychartered until November 2013 During this period we areattempting to standardize metamodels and interchange formats fortask models and abstract user interface models We are alsoworking on supplementary information covering the rationale for

49

adopting model-based user interface design techniques exemplaryuse cases and a glossary of terms

If we are successful further opportunities for standardizationinclude

bull metamodels and interchange formats for the context of usebull rule languages for mappings between layers in the

CAMELEON Reference Framework and for adaptation to thecontext of use at both design time and run-time

bull metamodels and interchange formats for the Concrete UserInterface

Whether the W3C Model-Based User Interfaces Working Group isrechartered when its current charter expires will depend ongreater engagement with industry This makes it essential for theSerenoa Project to focus on exploitation in its final year

4 CoDeMoDIS proposal for a COSTActionCOST (European Cooperation in Science and Technology) is a longrunning intergovernmental framework supporting cooperationamong scientists and researchers across Europe

bull httpwwwcosteu

A proposal for a COST Action has been prepared to supportcollaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID) If approved this willfoster continued collaboration beyond the end of the Serenoaproject and provide an opportunity for supporting involvement infurther work on standardization in addition to work onharmonization between the currently distinct fields of MDE andMBUID The proposal plans to set up working groups on thefollowing topics

bull Taxonomy of Model-Driven Engineering of InteractiveSystems

bull Comparative Analysis of Models Methods and RelatedTechnologies

bull Software Support for Model-Driven Engineering ofInteractive Systems

bull Harmonization and Unification of Standardisation Efforts

In addition a Standardization Coordinator would be assigned inorder to coordinate all efforts towards standardization

50

The participants behind the proposal come from a broad range ofcountries including Austria Belgium Bulgaria SwitzerlandCyprus Czech Republic Germany Denmark Estonia GreeceSpain Finland France Croatia Hungary Iceland Ireland IsraelItaly Luxembourg Macedonia Norway Poland Portugal RomaniaSerbia Sweden Slovenia Slovakia United Kingdom ArgentinaJapan New Zealand and the United States The proposer is DrGerrit Meixner German Research Center for Artificial Intelligence(DFKI) Kaiserslautern Germany

The pre-proposal was accepted and the full proposal submitted forreview at the end of July 2012

5 ISO 24744 standardisation actionISOIEC 24744 Software Engineering mdash Metamodel forDevelopment Methodologies is an international standard focusingon the use of meta models for software development methodologiesfor information-based domains

bull httpwwwisoorgisocatalogue_detailhtmcsnumber=38854

A standardization action has been suggested to harmonize the ISO24744 methodologies with Model-Based User InterfaceDevelopment techniques At this point this is very much in theearly planning stage

6 ConclusionsThis report surveys the standardization prospects for ideasemerging from the Serenoa project and describes the progressmade in the W3C Model-Based User Interfaces Working Group AFirst Public Working Draft has been published for task models andwill soon be followed by another for abstract user interface modelbased upon a synthesis of ideas from a range of submissions to theWorking Group The aim is to progress these to W3CRecommendations by the time the Working Groups Charter drawsto an end in November 2013

A major challenge will be to convince industry of the practical valueof model-based user interface design techniques and this willrequire investment in developing robust tools and run-timeenvironments as well as outreach on the business case foradoption The Serenoa project is playing a key role in supportingthis work but further investment will be needed beyond the end ofthe project if Europe is to realize the opportunities for exploiting

51

model-based user interface design techniques This is all the moreimportant given the current trend towards a wider range of userinterface technologies and device platforms Further work shouldalso take into account the emergence of the Internet of Things as adriver for new kinds of user interfaces along with the importanceof multilingual user interfaces to support interaction in peoplesnative languages

7 Referencesbull Limbourg Q Vanderdonckt J Michotte B Bouillon L and

Loacutepez-Jaquero V USIXML A language supporting multi-pathdevelopment of user interfaces Engineering HumanComputer Interaction and Interactive Systems Springer2005 134-135

bull Paternograve F Santoro C and Spano L D MARIA A universaldeclarative multiple abstraction-level language for service-oriented applications in ubiquitous environments ACMTrans Comput-Hum Interact ACM 2009 16 191-1930

bull Souchon N and Vanderdonckt J A Review of XML-Compliant User Interface Description Languages Proc of10th Int Conf on Design Specification and Verification ofInteractive Systems DSV-IS2003 (Madeira 4-6 June 2003)Jorge J Nunes NJ Falcao e Cunha J (Eds) Lecture Notesin Computer Science Vol 2844 Springer-Verlag Berlin2003 pp 377-391

52

  • Standarization Actions Report
  • Deliverable D621
    • Executive Summary
    • Table of Contents
    • Introduction
    • Potential opportunities for standardization
      • Task Models
      • Domain Models
      • Abstract UI Models
      • Concrete UI Models
        • WIMP (desktop GUI)
        • Touch-based GUI (smart phones and tablets)
        • Vocal UI
          • State Chart extensible Markup Language (SCXML)
            • Multimodal UI
            • Industrial UI
              • Context of Use
                • General Considerations
                • Industry Fulfilment of Safety Guidelines
                • Automotive Mitigation of Driver Distraction
                  • Multidimensional Adaptation of Service Front Ends
                    • CARF Reference Framework
                    • CADS Design Space
                    • CARFO Multidimensional Adaptation Ontology
                      • Design-time adaptation rules
                      • Run-time adaptation rules
                      • Advanced Adaptation Logic Description Language (AAL-DL)
                      • Corporate Rules for Consistent User Experience
                        • W3C Model-Based UI Working Group
                          • MBUI WG - Introduction
                          • MBUI WG History
                            • MBUI Incubator Group
                            • MBUI Workshop
                            • Formation of MBUI Working Group
                              • MBUI Working Group Charter
                                • Work Items
                                  • MBUI Submissions
                                    • Advanced Service Front-End Description Language (ASFE-DL)
                                    • The ConcurTaskTrees Notation (CTT)
                                    • Useware Markup Language (UseML)
                                    • User Interface Markup Language (UIML)
                                    • Abstract Interactor Model (AIM) Specification
                                    • Multimodal Interactor Mapping (MIM) Model Specification
                                      • Multimodal Mappings
                                      • Synchronization Mappings
                                      • Exemplary Mappings
                                        • UsiXML
                                          • Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled
                                            • MARIA
                                              • Abstract User Interface
                                              • Concrete User Interface
                                                • Concrete Desktop User Interface
                                                • Concrete Vocal User Interface
                                                  • MBUI WG Note - Introduction to Model-Based UI Design
                                                  • MBUI WG Note - Glossary of Terms
                                                  • MBUI WG Specification - Task Models for Model-Based UI Design
                                                  • MBUI WG Specification - Abstract User Interface Models
                                                  • MBUI WG Future Plans
                                                    • CoDeMoDIS proposal for a COST Action
                                                    • ISO 24744 standardisation action
                                                    • Conclusions
                                                    • References
Page 45: Standarization Actions Report - Europa · 2017-04-20 · Standarization Actions Report Project no. FP7 - 258030 Deliverable D6.2.1 Executive Summary This document provides a description

interfaces, we consider a presentation as a set of communications between the vocal device and the user that can be considered as a logical unit, e.g. a dialogue supporting the collection of information regarding a user. The AUI refinements for obtaining the Vocal CUI definition involve defining some elements that enable setting some presentation properties. In particular, we can define the default properties of the synthesized voice (e.g. volume, tone), the speech recognizer (e.g. sensitivity, accuracy level) and the DTMF (Dual-Tone Multi-Frequency) recognizer (e.g. terminating DTMF char).

The following are the interactor refinements:

• An Alarm can be implemented as a pre-recorded sound.

• A Description can be implemented as:

  ◦ speech, which defines text that the vocal platform must synthesize, or the path where the platform can find the text resources. It is furthermore possible to set a number of voice properties, such as emphasis, pitch, rate and volume, as well as the age and gender of the synthesized voice. Moreover, we have introduced control of behaviour in the event of unexpected user input: by suitably setting the element named barge-in, we can decide if the user can stop the synthesis, or if the application should ignore the event and continue.

  ◦ pre-recorded message, which defines a pre-recorded audio resource with an associated alternative content in case of unavailability.

• A MultipleChoice can be implemented as a vocal selection. This element defines the question(s) to direct to the user and the set of possible user inputs that the platform can accept. In particular, it is possible to define textual input (words or sentences) or DTMF input. In this version the interactor accepts more than one choice.

• A SingleChoice can be implemented as a vocal selection that accepts only one choice.

• An Activator can be implemented as a command, in order to execute a script; a submit, to send a set of data to a server; and a goto, to perform a call to a script that triggers an immediate redirection.

• A Navigator can be implemented as a goto, for automatic change of presentation; a link, for user-triggered change of presentation; and a menu, for supporting the possibility of multiple target presentations.

• A TextEdit can be implemented as a vocal textual input element, which permits setting a vocal request and specifying the path of an external grammar for the platform's recognition of the user input.


• A NumericalEditFull and NumericalEditInRange can be implemented as a vocal numerical input, which accepts only numbers (in a range, in the latter case) specified through a grammar.

• An ObjectEdit can be implemented as a record element, which allows specifying a request and storing the user input as an audio resource. It is possible to define a number of attributes related to the recording, such as beep, to emit a sound just before recording; maxtime, to set the maximum duration of the recording; and finalsilence, to set the interval of silence that indicates the end of the vocal input. Record elements can be used, for example, when the user input cannot be recognised by a grammar (e.g. a sound).
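As an illustration, the refinement of an ObjectEdit into a record element can be sketched as follows. The class name, defaults and markup rendering are illustrative assumptions, not MARIA's actual syntax; only the beep, maxtime and finalsilence attributes come from the description above.

```python
from dataclasses import dataclass

@dataclass
class VocalRecord:
    """Hypothetical sketch of an ObjectEdit refined as a vocal record element."""
    request: str           # prompt synthesized before recording starts
    beep: bool = True      # emit a sound just before recording
    maxtime: int = 10      # maximum duration of the recording, in seconds
    finalsilence: int = 2  # interval of silence that ends the vocal input, in seconds

    def to_markup(self) -> str:
        # Render a VoiceXML-like element (illustrative only)
        return (f'<record beep="{str(self.beep).lower()}" '
                f'maxtime="{self.maxtime}s" finalsilence="{self.finalsilence}s">'
                f'<prompt>{self.request}</prompt></record>')

print(VocalRecord(request="Please record your message").to_markup())
```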

With respect to the composition of interactors, the Vocal CUI offers four solutions for identifying the beginning and the end of a grouping:

• Inserting a sound at the beginning and at the end of the group.

• Inserting a pause, which must be neither too short (useless) nor too long (slow system feedback).

• Changing the synthesis properties (such as volume and gender).

• Inserting keywords that explicitly define the start and the end of the grouping.
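These four grouping solutions can be sketched as alternative rendering strategies. The function and markup below are hypothetical illustrations of the idea, not part of the MARIA language:

```python
def mark_group(prompts, strategy="sound"):
    """Wrap a group of vocal prompts with markers signalling its start and end.

    Each strategy mirrors one of the four solutions described above
    (illustrative only; element names are assumptions).
    """
    if strategy == "sound":
        # A sound at the beginning and at the end of the group
        return ["<audio src='group-start.wav'/>", *prompts, "<audio src='group-end.wav'/>"]
    if strategy == "pause":
        # A pause that is neither too short (useless) nor too long (slow feedback)
        return ["<break time='600ms'/>", *prompts, "<break time='600ms'/>"]
    if strategy == "voice":
        # Change the synthesis properties (e.g. gender) for the whole group
        return [f"<voice gender='female'>{p}</voice>" for p in prompts]
    if strategy == "keywords":
        # Keywords that explicitly delimit the grouping
        return ["Beginning of group.", *prompts, "End of group."]
    raise ValueError(f"unknown strategy: {strategy}")

print(mark_group(["Name?", "Age?"], strategy="keywords"))
```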

Another substantial difference of vocal interfaces lies in the event model. While in the case of graphical interfaces the events are related mainly to mouse and keyboard activities, in vocal interfaces we have to consider different types of events: noinput (the user has to enter a vocal input, but nothing is provided within a defined amount of time), nomatch (the input provided does not match any possible acceptable input) and help (the user asks for support, in any platform-specific way, in order to continue the session). All of them have two attributes: message, indicating what message should be rendered when the event occurs, and re-prompt, indicating whether or not to synthesize the last communication again.
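A minimal sketch of this event model; the handler table, its entries and the dispatch function are hypothetical, but the message and re-prompt attributes follow the description above:

```python
# Illustrative handler table for the three vocal event types described above.
EVENT_HANDLERS = {
    "noinput": {"message": "Sorry, I did not hear anything.", "reprompt": True},
    "nomatch": {"message": "Sorry, I did not understand that.", "reprompt": True},
    "help":    {"message": "You can say a city name, or 'quit'.", "reprompt": False},
}

def handle_event(event, last_prompt):
    """Return the utterances to synthesize when a vocal event occurs."""
    handler = EVENT_HANDLERS[event]
    out = [handler["message"]]
    if handler["reprompt"]:  # re-synthesize the last communication
        out.append(last_prompt)
    return out

print(handle_event("nomatch", "Which city are you flying to?"))
```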

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano: MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments. ACM Transactions on Computer-Human Interaction, Vol. 16, No. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document, we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this terminology to be focused on the needs of academic study as opposed to those of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend/resume
• Enabling

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who cannot see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and not a normative part of the specification.
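As an illustration of what a JSON alternative to the XML interchange format might look like, the following sketch serializes a tiny task model. All field names here are invented for the example, since only the XML schema is specified:

```python
import json

# Hypothetical JSON rendering of a small task model: a parent task whose two
# subtasks are related by the "enabling" temporal operator. Field names are
# illustrative assumptions, not part of the W3C specification.
task_model = {
    "task": "MakeReservation",
    "operator": "enabling",  # the first subtask enables the second
    "subtasks": [
        {"task": "SelectFlight", "type": "interaction"},
        {"task": "ConfirmBooking", "type": "interaction",
         "postcondition": "booking.confirmed == true"},
    ],
}

serialized = json.dumps(task_model, indent=2)
print(serialized)
```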

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has passed.
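The suspend/resume semantics in this automotive scenario can be sketched as a small controller; the class below is a hypothetical illustration of the operator's behaviour, not part of the specification:

```python
class TaskController:
    """Illustrative sketch of the suspend/resume temporal operator."""

    def __init__(self, task):
        self.stack = [task]  # the currently active task is on top

    def suspend_for(self, safety_task):
        # A safety-critical task (e.g. a hazard alert) pre-empts the current UI
        self.stack.append(safety_task)

    def resume(self):
        # The hazard has passed: drop the alert and resume the original task
        self.stack.pop()

    @property
    def active(self):
        return self.stack[-1]

ui = TaskController("navigation")
ui.suspend_for("hazard-alert")  # driver is warned; navigation is suspended
print(ui.active)
ui.resume()                     # hazard passed; navigation resumes
print(ui.active)
```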


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period, we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, the Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, the United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744 "Software Engineering — Metamodel for Development Methodologies" is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L.D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS 2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.

52

  • Standarization Actions Report
  • Deliverable D621
    • Executive Summary
    • Table of Contents
    • Introduction
    • Potential opportunities for standardization
      • Task Models
      • Domain Models
      • Abstract UI Models
      • Concrete UI Models
        • WIMP (desktop GUI)
        • Touch-based GUI (smart phones and tablets)
        • Vocal UI
          • State Chart extensible Markup Language (SCXML)
            • Multimodal UI
            • Industrial UI
              • Context of Use
                • General Considerations
                • Industry Fulfilment of Safety Guidelines
                • Automotive Mitigation of Driver Distraction
                  • Multidimensional Adaptation of Service Front Ends
                    • CARF Reference Framework
                    • CADS Design Space
                    • CARFO Multidimensional Adaptation Ontology
                      • Design-time adaptation rules
                      • Run-time adaptation rules
                      • Advanced Adaptation Logic Description Language (AAL-DL)
                      • Corporate Rules for Consistent User Experience
                        • W3C Model-Based UI Working Group
                          • MBUI WG - Introduction
                          • MBUI WG History
                            • MBUI Incubator Group
                            • MBUI Workshop
                            • Formation of MBUI Working Group
                              • MBUI Working Group Charter
                                • Work Items
                                  • MBUI Submissions
                                    • Advanced Service Front-End Description Language (ASFE-DL)
                                    • The ConcurTaskTrees Notation (CTT)
                                    • Useware Markup Language (UseML)
                                    • User Interface Markup Language (UIML)
                                    • Abstract Interactor Model (AIM) Specification
                                    • Multimodal Interactor Mapping (MIM) Model Specification
                                      • Multimodal Mappings
                                      • Synchronization Mappings
                                      • Exemplary Mappings
                                        • UsiXML
                                          • Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled
                                            • MARIA
                                              • Abstract User Interface
                                              • Concrete User Interface
                                                • Concrete Desktop User Interface
                                                • Concrete Vocal User Interface
                                                  • MBUI WG Note - Introduction to Model-Based UI Design
                                                  • MBUI WG Note - Glossary of Terms
                                                  • MBUI WG Specification - Task Models for Model-Based UI Design
                                                  • MBUI WG Specification - Abstract User Interface Models
                                                  • MBUI WG Future Plans
                                                    • CoDeMoDIS proposal for a COST Action
                                                    • ISO 24744 standardisation action
                                                    • Conclusions
                                                    • References
Page 46: Standarization Actions Report - Europa · 2017-04-20 · Standarization Actions Report Project no. FP7 - 258030 Deliverable D6.2.1 Executive Summary This document provides a description

bull A NumericalEditFull and NumericalEditInRange can beimplemented as a vocal numerical input which accepts onlynumbers (in a range in the latter case) specified through agrammar

bull An ObjectEdit can be implemented as a record elementwhich allows specifying a request and storing the user inputas an audio resources It is possible to define a number ofattributes relative to the recording such as beep to emit asound just before recording maxtime to set the maximumduration of the recording and finalsilence to set the intervalof silence that indicates the end of vocal input Recordelements can be used for example when the user inputcannot be recognised by a grammar (eg a sound)

With respect to the composition of interactors the Vocal CUI hasfour solutions that permits to identify the beginning and the end ofgrouping

bull Inserting a sound at the beginning and at the end of thegroup

bull Inserting a pause which must be neither too short (useless)nor too long (slow system feedback)

bull Change the synthesis properties (such as volume andgender)

bull insert keywords that explicitly define the start and the end ofthe grouping

Another substantial difference of vocal interfaces is in the eventmodel While in the case of graphical interfaces the events arerelated mainly to mouse and keyboard activities in vocal interfaceswe have to consider different types of events noinput (the user hasto enter a vocal input but nothing is provided within a definedamount of time) nomatch the input provided does not match anypossible acceptable input and help when the user asks for support(in any platform specific way) in order to continue the session Allof them have two attributes message indicating what messageshould be rendered when the event occurs and re-prompt toindicate whether or not to synthesize the last communication again

[Paterno2000] F. Paternò, C. Santoro, L.D. Spano: MARIA: A Universal Language for Service-Oriented Applications in Ubiquitous Environments. ACM Transactions on Computer-Human Interaction, Vol. 16, N. 4, November 2009, pp. 19:1-19:30, ACM Press.


3.5 MBUI WG Note - Introduction to Model-Based UI Design

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides introductory material describing model-based user interface design, its benefits and limitations, and a range of illustrative use cases.

3.6 MBUI WG Note - Glossary of Terms

This document is currently in preparation and is expected to be published as a W3C Working Group Note in October 2012. The document provides definitions for a range of terms used for model-based user interface design, and is targeted at would-be adopters of model-based user interface design techniques. In working on this document we have noticed that different practitioners of model-based user interface design techniques often use slightly different terminology; moreover, there is an understandable tendency for this to be focused on the needs of academic study, as opposed to that of industrial users. We have therefore taken a selective approach to which terms we include in the glossary.

3.7 MBUI WG Specification - Task Models for Model-Based UI Design

• http://www.w3.org/TR/2012/WD-task-models-20120802/

This is a specification document that the Model-Based User Interfaces Working Group is progressing along the W3C Recommendation Track, with a view to attaining Recommendation status by the end of the current charter period (November 2013). The First Public Working Draft was published on 2nd August 2012.

The specification is based upon the ConcurTaskTrees (CTT) notation, and refines the metamodel introduced in earlier versions of CTT.


The refinements include the introduction of postconditions and adjustments to the set of temporal operators:

• Choice
• Order independence
• Interleaving
• Parallelism
• Synchronization
• Disabling
• Suspend resume
• Enabling
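In the customary textual CTT syntax, several of these operators are written as infix symbols, e.g. T1 [] T2 (choice), T1 |=| T2 (order independence), T1 ||| T2 (interleaving), T1 |[]| T2 (synchronization), T1 [> T2 (disabling), T1 |> T2 (suspend/resume) and T1 >> T2 (enabling). A small illustrative task expression (the task names are invented, not taken from the specification) might read:

```
MakeReservation    = (SelectFlight >> EnterPassengerData) [> CancelReservation
EnterPassengerData = EnterName |=| EnterPassportNumber
```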

The specification provides a normative metamodel as a UML 2.0 class diagram, along with an easy-to-read textual alternative for people who can't see the diagram. An XML schema is provided as an interchange format, although we envisage the use of other formats, e.g. JavaScript Object Notation (JSON). The graphical notation commonly used for CTT is considered to be optional and is not a normative part of the specification.

The document concludes with a table showing which operators are supported by a range of task modelling languages. It is interesting to note that whilst all of the languages considered support enabling, very few support disabling (deactivation), and even fewer support suspend and resume. The latter is considered to be critical for automotive user interfaces, where the issue of driver distraction is a major consideration. It is essential to be able to suspend a user interface in favour of safety-critical services, e.g. alerts of upcoming hazards. The original user interface can be resumed once the hazard has been passed.


3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use

• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time

• metamodels and interchange formats for the Concrete User Interface

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu/

A proposal for a COST Action has been prepared to support collaboration on Model-Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems

• Comparative Analysis of Models, Methods and Related Technologies

• Software Support for Model-Driven Engineering of Interactive Systems

• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal was submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V.: USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D.: MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J.: A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcao e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.

                          • MBUI WG - Introduction
                          • MBUI WG History
                            • MBUI Incubator Group
                            • MBUI Workshop
                            • Formation of MBUI Working Group
                              • MBUI Working Group Charter
                                • Work Items
                                  • MBUI Submissions
                                    • Advanced Service Front-End Description Language (ASFE-DL)
                                    • The ConcurTaskTrees Notation (CTT)
                                    • Useware Markup Language (UseML)
                                    • User Interface Markup Language (UIML)
                                    • Abstract Interactor Model (AIM) Specification
                                    • Multimodal Interactor Mapping (MIM) Model Specification
                                      • Multimodal Mappings
                                      • Synchronization Mappings
                                      • Exemplary Mappings
                                        • UsiXML
                                          • Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled
                                            • MARIA
                                              • Abstract User Interface
                                              • Concrete User Interface
                                                • Concrete Desktop User Interface
                                                • Concrete Vocal User Interface
                                                  • MBUI WG Note - Introduction to Model-Based UI Design
                                                  • MBUI WG Note - Glossary of Terms
                                                  • MBUI WG Specification - Task Models for Model-Based UI Design
                                                  • MBUI WG Specification - Abstract User Interface Models
                                                  • MBUI WG Future Plans
                                                    • CoDeMoDIS proposal for a COST Action
                                                    • ISO 24744 standardisation action
                                                    • Conclusions
                                                    • References
Page 49: Standarization Actions Report - Europa · 2017-04-20 · Standarization Actions Report Project no. FP7 - 258030 Deliverable D6.2.1 Executive Summary This document provides a description

3.8 MBUI WG Specification - Abstract User Interface Models

This document is currently in preparation and is expected to be published as a W3C Working Draft in October 2012. The document will specify a metamodel and interchange format for abstract user interface models. This is taking longer than originally envisaged in the Working Group Charter, due to the need to assimilate ideas from all of the various Working Group submissions and to reach a broad consensus on a merged approach.

The following diagram presents the metamodel as of the beginning of August 2012, and as such is likely to be subject to revision in the First Public Working Draft.
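To make the idea of an interchange format for abstract user interface models concrete, the following sketch builds a small tree of modality-independent interactors and serializes it as XML. It is purely illustrative: the element names, attributes and structure are assumptions made for this example, not the metamodel being standardized by the Working Group.

```python
import xml.etree.ElementTree as ET

# Hypothetical, highly simplified abstract UI model: a tree of
# modality-independent interactors. Element and attribute names are
# illustrative assumptions, not the Working Group's metamodel.
def build_login_aui():
    root = ET.Element("aui", version="0.1")
    group = ET.SubElement(root, "interactor", role="group", id="login")
    ET.SubElement(group, "interactor", role="input", id="user",
                  datatype="string", label="User name")
    ET.SubElement(group, "interactor", role="input", id="password",
                  datatype="string", masked="true", label="Password")
    # An "activator" abstracts away from button vs. voice command etc.
    ET.SubElement(group, "interactor", role="activator", id="submit",
                  label="Log in")
    return root

aui = build_login_aui()
print(ET.tostring(aui, encoding="unicode"))
```

Because the model says only what the user can do (enter two values, trigger an action) and not how, the same document could be rendered as a desktop form, a touch UI, or a vocal dialogue.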

3.9 MBUI WG Future Plans

The W3C Model-Based User Interfaces Working Group is currently chartered until November 2013. During this period we are attempting to standardize metamodels and interchange formats for task models and abstract user interface models. We are also working on supplementary information covering the rationale for adopting model-based user interface design techniques, exemplary use cases, and a glossary of terms.

If we are successful, further opportunities for standardization include:

• metamodels and interchange formats for the context of use
• rule languages for mappings between layers in the CAMELEON Reference Framework, and for adaptation to the context of use at both design time and run-time
• metamodels and interchange formats for the Concrete User Interface
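The flavour of such mapping rules can be sketched as a table of design-time rules that selects a concrete widget for an abstract interactor role according to the context of use. All names below are assumptions for illustration; this is not the syntax of AAL-DL or of any standardized rule language.

```python
# Minimal sketch of design-time mapping rules from abstract interactor
# roles to concrete widgets, selected by target platform (one facet of
# the context of use). All vocabulary here is illustrative only.
MAPPING_RULES = [
    # (abstract role, platform, concrete widget)
    ("input",     "desktop", "text_field"),
    ("input",     "vocal",   "spoken_prompt"),
    ("activator", "desktop", "push_button"),
    ("activator", "vocal",   "voice_command"),
]

def map_interactor(role, platform):
    """Return the concrete widget for an abstract role in a given context."""
    for r, p, widget in MAPPING_RULES:
        if r == role and p == platform:
            return widget
    raise LookupError(f"no mapping for {role} on {platform}")

print(map_interactor("input", "vocal"))       # spoken_prompt
print(map_interactor("activator", "desktop")) # push_button
```

A standardized rule language would of course be richer (conditions over the full context of use, priorities, run-time re-evaluation), but the core idea is the same: keep the abstract model stable and vary the concrete realization by rule.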

Whether the W3C Model-Based User Interfaces Working Group is rechartered when its current charter expires will depend on greater engagement with industry. This makes it essential for the Serenoa Project to focus on exploitation in its final year.

4 CoDeMoDIS proposal for a COST Action

COST (European Cooperation in Science and Technology) is a long-running intergovernmental framework supporting cooperation among scientists and researchers across Europe.

• http://www.cost.eu

A proposal for a COST Action has been prepared to support collaboration on Model Driven Engineering (MDE) and Model-Based User Interface Development (MBUID). If approved, this will foster continued collaboration beyond the end of the Serenoa project, and provide an opportunity for supporting involvement in further work on standardization, in addition to work on harmonization between the currently distinct fields of MDE and MBUID. The proposal plans to set up working groups on the following topics:

• Taxonomy of Model-Driven Engineering of Interactive Systems
• Comparative Analysis of Models, Methods and Related Technologies
• Software Support for Model-Driven Engineering of Interactive Systems
• Harmonization and Unification of Standardisation Efforts

In addition, a Standardization Coordinator would be assigned in order to coordinate all efforts towards standardization.


The participants behind the proposal come from a broad range of countries, including Austria, Belgium, Bulgaria, Switzerland, Cyprus, Czech Republic, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Croatia, Hungary, Iceland, Ireland, Israel, Italy, Luxembourg, Macedonia, Norway, Poland, Portugal, Romania, Serbia, Sweden, Slovenia, Slovakia, United Kingdom, Argentina, Japan, New Zealand and the United States. The proposer is Dr Gerrit Meixner, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.

The pre-proposal was accepted, and the full proposal submitted for review at the end of July 2012.

5 ISO 24744 standardisation action

ISO/IEC 24744, Software Engineering — Metamodel for Development Methodologies, is an international standard focusing on the use of metamodels for software development methodologies for information-based domains.

• http://www.iso.org/iso/catalogue_detail.htm?csnumber=38854

A standardization action has been suggested to harmonize the ISO 24744 methodologies with Model-Based User Interface Development techniques. At this point, this is very much in the early planning stage.

6 Conclusions

This report surveys the standardization prospects for ideas emerging from the Serenoa project, and describes the progress made in the W3C Model-Based User Interfaces Working Group. A First Public Working Draft has been published for task models, and will soon be followed by another for abstract user interface models, based upon a synthesis of ideas from a range of submissions to the Working Group. The aim is to progress these to W3C Recommendations by the time the Working Group's Charter draws to an end in November 2013.

A major challenge will be to convince industry of the practical value of model-based user interface design techniques, and this will require investment in developing robust tools and run-time environments, as well as outreach on the business case for adoption. The Serenoa project is playing a key role in supporting this work, but further investment will be needed beyond the end of the project if Europe is to realize the opportunities for exploiting model-based user interface design techniques. This is all the more important given the current trend towards a wider range of user interface technologies and device platforms. Further work should also take into account the emergence of the Internet of Things as a driver for new kinds of user interfaces, along with the importance of multilingual user interfaces to support interaction in people's native languages.

7 References

• Limbourg, Q., Vanderdonckt, J., Michotte, B., Bouillon, L. and López-Jaquero, V. USIXML: A language supporting multi-path development of user interfaces. Engineering Human Computer Interaction and Interactive Systems, Springer, 2005, 134-135.

• Paternò, F., Santoro, C. and Spano, L. D. MARIA: A universal, declarative, multiple abstraction-level language for service-oriented applications in ubiquitous environments. ACM Trans. Comput.-Hum. Interact., ACM, 2009, 16, 19:1-19:30.

• Souchon, N. and Vanderdonckt, J. A Review of XML-Compliant User Interface Description Languages. Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, 4-6 June 2003), Jorge, J., Nunes, N.J., Falcão e Cunha, J. (Eds.), Lecture Notes in Computer Science, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 377-391.

                                      • Synchronization Mappings
                                      • Exemplary Mappings
                                        • UsiXML
                                          • Proposed UsiXML extension enabling the detailed description of the users with focus on the elderly and disabled
                                            • MARIA
                                              • Abstract User Interface
                                              • Concrete User Interface
                                                • Concrete Desktop User Interface
                                                • Concrete Vocal User Interface
                                                  • MBUI WG Note - Introduction to Model-Based UI Design
                                                  • MBUI WG Note - Glossary of Terms
                                                  • MBUI WG Specification - Task Models for Model-Based UI Design
                                                  • MBUI WG Specification - Abstract User Interface Models
                                                  • MBUI WG Future Plans
                                                    • CoDeMoDIS proposal for a COST Action
                                                    • ISO 24744 standardisation action
                                                    • Conclusions
                                                    • References