
LONG PAPER

Anthony Savidis · Constantine Stephanidis

Unified user interface development: the software engineering of universally accessible interactions

Published online: 4 May 2004 © Springer-Verlag 2004

Abstract In the information society, the notion of "computing platform" encompasses, apart from traditional desktop computers, a wide range of devices, such as public-use terminals, phones, TVs, car consoles, and a variety of home appliances. Today, such computing platforms are mainly delivered with embedded operating systems (such as Windows CE, Embedded/Personal Java, and Psion Symbian), while their operational capabilities and supplied services are controlled through software. The broad use of such computing platforms in everyday life puts virtually anyone in the position of using interactive software applications in order to carry out a variety of tasks in a variety of contexts of use. Therefore, traditional development processes, targeted towards the elusive "average case", become clearly inappropriate for the purposes of addressing the new demands for user- and usage-context diversity and for ensuring accessible and high-quality interactions. This paper will introduce the concept of unified user interfaces, which constitutes our theoretical platform for universally accessible interactions, characterized by the capability to self-adapt at run-time, according to the requirements of the individual user and the particular context of use. Then, the unified user interface development process for constructing unified user interfaces will be described, elaborating on the interactive-software engineering strategy to accomplish the run-time self-adaptation behaviour.

Keywords Development processes · Software engineering · Unified user interfaces · User-adapted interfaces · User interface architectures

1 Introduction

Today, software products support interactive behaviours that are biased towards the "typical", or "average", able-bodied user, familiar with the notion of the "desktop" and the typical input and output peripherals of the personal computer. This has been the result of software developers' assumptions regarding the target user groups, the technological means at their disposal and the type of tasks supported by computers. Thus, the focus has been on "knowledgeable" workers, capable and willing to use technology in the work environment, to experience productivity gains and performance improvements.

Though the information society is still in its infancy, its progressive evolution has already invalidated (at least some of) the assumptions in the above scenario. The fusion between information technologies, telecommunications and consumer electronics has introduced radical changes to traditional markets and complemented the business demand with a strong residential component. At the same time, the type and context of use of interactive applications is radically changing, due to the increasing availability of novel interaction technologies (e.g., personal digital assistants (PDAs), kiosks, cellular phones and other network-attachable equipment) which progressively enable nomadic access to information.

The above paradigm shift poses several challenges: users are no longer only the traditional able-bodied, skilled and computer-literate professionals; product developers can no longer know who their target users will be; information is no longer relevant only to the business environment; and artefacts are no longer bound to the technological specifications of a pre-defined interaction platform. In this context, users are potentially all citizens of an emerging information society who demand customized solutions to obtain timely access to virtually any application, from anywhere, at any time.

A. Savidis · C. Stephanidis (&)
Foundation for Research and Technology - Hellas,
Institute of Computer Science, Science and Technology Park of Crete,
GR-71110 Heraklion, Crete, Greece
E-mail: [email protected]
Tel.: +30-2810-391741
Fax: +30-2810-391740

Univ Access Inf Soc (2004) 3: 165–193
DOI 10.1007/s10209-004-0096-8

This paper will introduce the concept of unified user interfaces and point out some of the distinctive properties that render it an effective approach towards universal access within the information society. Subsequently, the unified user interface development approach will be described as an approach conveying a new perspective on the development of user interfaces; it provides a principled and systematic approach towards coping with diversity in the target user groups, tasks and environments of use. In other words, unified user interface development provides a pathway towards accommodating the interaction requirements of the broadest possible end-user population and contexts of use.

The notion of unified user interfaces originated from research efforts aiming to address the issues of accessibility and interaction quality for people with disabilities [1]. The primary intention was to articulate some of the key principles of design for all in a manner that would be applicable and useful to the conduct of Human-Computer Interaction (HCI). Subsequently, these principles have been extended, appropriately adapted, compared with existing techniques, intensively tested and validated in the course of several projects, and formulated in an engineering code of practice that depicts a concrete proposition for interface development in the light of universal access.

2 The concept of unified user interfaces

A unified user interface is the interaction-specific software of applications or services which is capable of self-adapting to the individual end-user requirements and contexts of use. Such an adaptation may reflect varying patterns of interactive behaviour, at the physical, syntactic or semantic level of interaction, to accommodate specific user- and context-oriented parameters.

Practically speaking, from the end-user point of view, a unified user interface is actually an interface that can automatically adapt to the individual user attributes (e.g., requirements, abilities, and preferences), as well as to the particular characteristics of the usage context (e.g., computing platform, peripheral devices, interaction technology and surrounding environment). Therefore, a unified user interface realises the combination of:

– User-adapted behaviour, i.e., the automatic delivery of the most appropriate user interface for the particular end user (user awareness);

– Usage-context adapted behaviour, i.e., the automatic delivery of the most appropriate user interface for the particular situation of use (usage-context awareness).

Hence, the characterisation "unified" does not have any particular behavioural connotation, at least as seen from an end-user perspective. Instead, the notion of "unification" reflects the specific software engineering strategy needed to accomplish this behaviour, emphasising the proposed development-oriented perspective.

More specifically, in order to realise this form of adapted behaviour, a unified user interface reflects the following fundamental development properties (a minimal code sketch follows the list):

– It encapsulates alternative dialogue patterns (i.e., implemented dialogue artefacts), for various dialogue design contexts (i.e., a sub-task, a primitive user action, a visualisation), appropriately associated to the different values of user- and usage-context related attributes. The need for such alternative dialogue patterns is dictated by the design process: given any particular design context, for different user- and usage-context attribute values, alternative design artefacts are needed to accomplish optimal interaction;

– It encapsulates representation schemes for user- and usage-context parameters, internally utilising user- and usage-context information resources (e.g., repositories, servers), to extract or to update user- and usage-context information;

– It encapsulates the necessary design knowledge and decision-making capability for activating, during run-time, the most appropriate dialogue patterns (i.e., interactive software components), according to particular instances of user- and usage-context attributes.
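To ground these three properties, the following minimal Java sketch shows alternative dialogue patterns registered per dialogue design context and selected at run-time from user- and usage-context attribute values. All class and method names here are illustrative assumptions, not part of the unified user interface specification:

    import java.util.*;
    import java.util.function.Predicate;

    public class DesignSpaceSketch {
        // A dialogue pattern: one implemented dialogue artefact for a design context.
        interface DialoguePattern {
            void activate();   // instantiate the interactive component
            void cancel();     // destroy it
        }

        // One alternative: the condition under which this artefact is the optimal choice.
        record Alternative(Predicate<Map<String, Object>> fits, DialoguePattern pattern) {}

        // The implemented design space: alternative patterns per design context (e.g., per sub-task).
        private final Map<String, List<Alternative>> contexts = new HashMap<>();

        void register(String designContext, Predicate<Map<String, Object>> fits, DialoguePattern p) {
            contexts.computeIfAbsent(designContext, k -> new ArrayList<>()).add(new Alternative(fits, p));
        }

        // Decision step: pick the first alternative whose condition matches the
        // current user- and usage-context attribute values.
        Optional<DialoguePattern> select(String designContext, Map<String, Object> attributes) {
            return contexts.getOrDefault(designContext, List.of()).stream()
                           .filter(a -> a.fits().test(attributes))
                           .map(Alternative::pattern)
                           .findFirst();
        }
    }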

The property of unified user interfaces to encapsulate alternative, mutually exclusive design artefacts, which can be purposefully designed and implemented for a particular design context, constitutes one of the main contributions of this research work within the interface software engineering arena. As will be discussed in subsequent sections, this notion of adaptation has not been supported until now. Previous work in adaptive interfaces has put emphasis primarily on adaptive interface updates, driven from continuous interaction monitoring and analysis, rather than on optimal design instantiation before interaction begins.

A second contribution concerns the advanced technical properties of unified user interfaces, associated with the encapsulation of interactive behaviours targeted to the various interaction technologies and platforms. This requires, from the engineering point of view: (a) the organisation of alternative dialogue components on the basis of their corresponding dialogue design context (i.e., all dialogue alternatives of a particular sub-task are placed around the same run-time control unit); (b) the use of different interface toolkit libraries (e.g., Java for graphical user interfaces (GUIs) and HAWK [33] for non-visual interaction); and (c) the embedding of alternative dialogue control policies, since the different toolkits may require largely different methods for interaction management (e.g., interaction object control, display facilities, direct access to I/O devices). These issues have not been explicitly addressed by previous work on adaptive systems; none of the known systems was targeted at supporting universally accessible interactions, since the various known demonstrators and their particular architectures were clearly focused on the GUI interface domain.


A third contribution concerns the specific architectural proposition that is part of the unified user interface development approach. This proposition presents a code of practice that provides an engineering "blueprint" for addressing the relevant software engineering issues: distribution of functional roles, component-based organisation, and support for extension and evolution. In this context, the specific engineering issues addressed by the proposed code of practice are as follows:

– How are implemented dialogue components organised, where do they reside, and which are the algorithmic control requirements in order to accomplish run-time adaptation?

– How should interaction monitoring be organised, and what are the algorithmic requirements for the control of monitoring during interaction?

– Which are the steps involved in employing existing implemented interactive software, and how can it be extended or embedded in the context of a unified user interface implementation?

– Where is user-oriented information stored, and in what form; in what format is user-oriented information communicated, and between which components?

– What components are required to dynamically identify information regarding the end user?

– During run-time, a unified user interface decides which dialogue components should comprise the delivered interface, given a particular end user and usage context; where is the implementation of such adaptation-oriented decision-making logic encapsulated, and in what representation or form?

– How are adaptation decisions communicated and applied, and what are the implications for the organisation of implemented dialogue components?

– What are the communication requirements between the various components of the architecture to perform adaptation?

– How is dynamic interface update performed, based on dynamically detected user attributes?

– Which of the components of the architecture are re-usable?

– How are the components of the architecture affected if the unified user interface is extended to address additional user- and usage-context parameter values?

– Which implementation practice and tool is better suited for each component of the architecture?

In this paper, the various aspects of the architecture will be discussed on the basis of a specific unified interactive application that is built following this code of practice, namely the AVANTI browser [38]. The target user audience for the AVANTI browser¹ included: able-bodied, motor-impaired and blind users, with different computer literacy expertise, in different contexts of use (office, home, public terminals at stations/airports, PDAs, etc.). The interface implementation of the AVANTI browser reached 80 KLOCs (thousands of lines of code), and served as the main testbed for validating the unified user interface development approach. The discussion will be based on specifically chosen adaptation scenarios, currently not implemented by existing Web browsers, elaborating on the way they have been architecturally structured and addressing lower-level engineering strategies.

3 Related work

Previous efforts in development methods, some of which were explicitly targeted towards universally accessible interactions, fall under four general categories: (a) research work related to user interface tools, concerning prevalent architectural practices for interactive systems; (b) developments targeted to enabling the user interface to dynamically adapt to end users by inferring particular user attribute values during interaction (e.g., preferences, application expertise); (c) special-purpose tools and architectures developed to enable the interface to cater for alternative modalities and interaction technologies; and (d) alternative access systems, concerning developments that have been targeted to "manual" or "semi-automatic" accessibility-oriented adaptations of originally non-accessible systems or applications.

3.1 User interface software architectures

Following the introduction of GUIs, early work in user interface software architectures had focused on window managers, event mechanisms, notification-based architectures and toolkits of interaction objects. Those architectural models were quickly adopted by mainstream tools, thus becoming directly encapsulated within the prevailing user interface software and technology (UIST). Today, all available user interface development tools support object hierarchies, event distribution mechanisms, and callbacks at the basic implementation model. In addition to these early attempts in identifying architectural components of user interface software, there have been other architectural models, with a somewhat different focus, which, however, did not gain as much acceptance in the commercial arena as was originally expected. The Seeheim Model [16], and its successor, the Arch Model [43], have been mainly defined with the aim to preserve the so-called "principle of separation" between the interactive and the non-interactive code of computer-based applications. These models became popular as a result of the early research work on User Interface Management Systems (UIMS) [26], while in the domain of universal access systems they do not provide concrete architectural and engineering guidelines other than those related to the notion of separation.

¹ The AVANTI browser has been developed in the context of the AVANTI project.


Apart from these two architectural models, mainly referring to the inter-layer organization aspects of interactive applications, there have been two other, more implementation-oriented models with an object-oriented flavour: the Model-View-Controller (MVC) model [14, 24] and the Presentation-Abstraction-Control (PAC) model [10]. Those models focus on intra-layer software organization policies, by providing logical schemes for structuring the implementation code. All four models, though typically referred to as architectural frameworks, are today considered as meta-models, following the introduction of the term for UIMS models in [43]. They represent abstract families of architectures, since they do not meet the fundamental requirements of a concrete software architecture, as defined by [19]: "an architecture should provide a structure, as well as the interfaces between components, by defining the exact patterns by which information is passed back and forth through these interfaces". Additionally, the key flavour in those approaches is the separation between semantic content and representation form, by proposing engineering patterns enabling a clear-cut separation, so that flexible visualizations and multiple views can be easily accomplished.

3.2 Adaptive interaction

Most of the existing work regarding system-driven, user-oriented adaptation concerns the capability of an interactive system to dynamically (i.e., during interaction) detect certain user properties, and accordingly decide various interface changes. This notion of adaptation falls in the adaptivity category, i.e., adaptation performed after the initiation of interaction, based on interaction monitoring information; this has been commonly referred to as adaptation during use. Although adaptivity is considered a good approach to meeting diverse user needs, there is always the risk of confusing the user with dynamic interface updates. Work on adaptive interaction has addressed various key technical issues concerning the derivation of appropriate constructs for the embodiment of adaptation capabilities and facilities in the user interface; most of those technical issues, if relevant to unified user interfaces, will be appropriately discussed in subsequent sections of this paper.

Overall, in the domain of adaptive systems, there have been mainly two types of research work: (a) theoretic work on architectures; and (b) concrete work regarding specific systems and components demonstrating adaptive behaviour. On the one hand, theoretic work addressed high-level issues, rather than specific engineering challenges, such as the ones identified earlier in this paper. Those architectures convey useful information for the general system structure and potential control flow, mainly helping to understand the system as a whole. However, since they are mainly conceptual models, they cannot be deployed to derive the detailed software organisation and the regulations for run-time coordination, as needed in the software implementation process.

On the other hand, developments in adaptive systems have been mainly focused on particular parts or aspects of the adaptation process, such as user modeling or decision-making, and less on usage-context adaptation or interface component organisation. Additionally, the inter-linking of the various components, together with the corresponding communication policies employed, primarily reflected the specific needs of the particular systems developed, rather than a generalised approach facilitating re-use. Finally, while most known adaptive systems addressed the issue of dynamic interface adaptation, there has been no particular attention to the notion of individualised dynamic interface assembly before initiation of interaction. This implies that the interface can be structured on-the-fly by dialogue components maximally fitting end-user and usage-context attributes.

This notion of optimal interface delivery before interaction is initiated constitutes the most important issue in universally accessible interactions; unless the initial interface delivered to the user is directly accessible, there is no point in applying adaptivity methods, since the latter assume that interaction is already taking place.

3.2.1 Which architecture for adaptivity?

Although concrete software architectures for adaptive user interfaces have not been clearly defined, there exist various proposals as to what should be incorporated in a computable form into an adaptive interactive system. In [11], these are characterised as "categories of computable artefacts", and are summarised as being "the types of models that are required within a structural model of adaptive interactive software". An appropriate reference to [18] is also made, regarding structural interface models:

Structural descriptive models of the human-computer interface... serve as frameworks for understanding the elements of interfaces and for guiding the dialogue developer in their construction.

However, developers require concrete software architectures for structuring and engineering interactive systems, and software systems in general. From this point of view, the information provided in [11] fulfils neither the requirements of an interface structural model, as defined in [18], nor those of a software architecture, as defined in [19] and [25]. This fact supports the initial argument that a concrete, generic architectural framework for adaptive interfaces, and automatically adapted interactions as a broader category, is not yet available. This argument will be further elaborated on, as various aspects of existing work on adaptive interfaces are incrementally analysed in the subsequent sections.


3.2.2 User models versus user modeling frameworks

In all types of systems aiming to support user adaptivity in performing certain tasks, both embedded user models and user-task models have played a significant role. In [23], an important distinction is made between the user modeling component, encompassing methods to represent user-oriented information, and the particular user models as such, representing an instance of the knowledge framework for a particular user (i.e., an individual user model), or user group (i.e., a stereotype model). However, this distinction explicitly associates the user model with the modeling framework, thus necessarily establishing a dependency between the adaptation-targeted decision-making software (which would need to process user models) and the overall user-modeling component. This remark reveals the potential architectural hazard of rendering an adaptive system "monolithic": since the user model is linked directly with the modeling component, and decision-making is associated with user models, it may be deemed necessary or appropriate that all such knowledge categories be physically located together.

More recent work has reflected the technical benefits of physically splitting the user-modeling component from the adaptive inference core, placing the responsibility for decision making directly on applications, while enabling remote sharing of user modeling servers by multiple clients [22].

3.2.3 Alternative dialogue patterns and the need for abstraction

The need for explicit design, as well as the run-time availability, of design alternatives has already been identified in the context of interface adaptation [5]. In view of the need for managing alternative interaction patterns, the importance of abstractions has been identified, starting from the observation that design alternatives constructed with an adaptation perspective are likely to exhibit some common dialogue structures. In [7] it is pointed out that "flexible abstractions for executable dialogue specifications" are a "necessary condition for the success of adaptable human-computer interfaces". This argument implies that an important element in the success of adaptive systems is the provision of implemented mechanisms of abstraction in interactive software, allowing the flexible run-time manipulation of dialogue patterns.

3.2.4 Dynamic user attribute detection

The most common utilization of internal dialogue representation has involved the collection and processing of interaction monitoring information. Such information, gathered at runtime, is analysed internally (through different types of knowledge processing) to derive certain user attribute values (not known prior to the initiation of interaction), which may drive appropriate interface adaptivity actions. A well-known adaptive system employing such techniques is MONITOR [3]. Similarly, for the purpose of dynamic detection of user attributes, a monitoring component, in conjunction with a UIMS, is employed in the AIDA system [9]. An important technical implication in this context is that dialogue modeling must be combined with user models. Thus, as discussed earlier, it becomes inherently associated with the user-modeling component, as well as with adaptation-targeted decision-making software. Effectively, this "biases" the overall adaptive system architecture towards a monolithic structure, turning the development of adaptive interface systems into a rather complicated software engineering process. It is argued that such an engineering tactic, placing all those components together within a monolithic system implementation, is a less than optimal architectural option. Moreover, considering that this approach has been adopted to address the issue of dynamic user attribute detection, it will be shown that a more flexible, distributed, and re-use enabled engineering structure can be adopted to effectively pursue the same goal.

This argument is also supported by the fact that in most available interactive applications, internal executable dialogue models exist only in the form of programmed software modules. Higher-order executable dialogue models (which would reduce the need for low-level programming), such as those previously mentioned, have been supported only by research-oriented UIMS tools. Conversely, the outcome of interface development environments, like Visual Basic, is, at present, in a form more closely related to the implementation world, rendering the extraction of any design-oriented context difficult or impossible. Hence, on the one hand, dynamic user attribute detection will necessarily have to engage dialogue-related information, while on the other hand, it is unlikely that such required design information is practically extractable from the interaction control implementation.

3.2.5 System-initiated actions to perform adaptivity

The final step in a run-time adaptation process is the execution of the necessary interface updates at the software level. In this context, four categories of system-initiated actions to be performed at the dialogue control level have been distinguished [8], for the execution of adaptation decisions: (i) enabling (i.e., the activation or deactivation of dialogue components); (ii) switching (i.e., selecting one from various alternative pre-configured components); (iii) re-configuring (i.e., modifying dialogue by using pre-defined components); and (iv) editing (i.e., no restrictions on the type of interface updates). The preceding categorization represents more a theoretical perspective, rather than reflecting an interface engineering one. Furthermore, the term "component" denotes mainly visual interface structures, rather than referring to implemented sub-dialogues, including physical structure and/or interaction control.

169

Page 6: Unified user interface development: the software engineering of … · 2011. 3. 14. · A third contribution concerns the specific architec-tural proposition that is part of the

In this sense, it is argued that it suffices to define only two action classes, applicable on interface components: (a) activate components; and (b) cancel activated components (i.e., deactivate). As will be discussed, these two actions directly map to the implementation domain (i.e., activation means "instantiation" of software objects, while cancellation means "destruction"), thus considerably downsizing the problem of modeling adaptation actions.
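The direct mapping of these two action classes onto object lifetimes can be sketched as follows. This is a minimal Java illustration with assumed names, not a prescribed API: activation instantiates the component's software objects, cancellation destroys them:

    import java.util.*;
    import java.util.function.Supplier;

    public class DialogueComponentManager {
        // An implemented sub-dialogue; activation instantiates its software
        // objects, cancellation destroys them.
        public interface DialogueComponent { void instantiate(); void destroy(); }

        public enum AdaptationAction { ACTIVATE, CANCEL }

        private final Map<String, Supplier<DialogueComponent>> factories = new HashMap<>();
        private final Map<String, DialogueComponent> active = new HashMap<>();

        public void register(String id, Supplier<DialogueComponent> factory) {
            factories.put(id, factory);
        }

        public void apply(AdaptationAction action, String id) {
            switch (action) {
                case ACTIVATE -> active.computeIfAbsent(id, k -> {
                    DialogueComponent c = factories.get(k).get();
                    c.instantiate();   // activation = instantiation of software objects
                    return c;
                });
                case CANCEL -> {
                    DialogueComponent c = active.remove(id);
                    if (c != null) c.destroy();   // cancellation = destruction
                }
            }
        }
    }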

3.2.6 Structuring dialogue implementation for adaptivity

The notion of interface component refers to implemented sub-dialogues provided by means of pre-packaged, directly deployable software entities [37]. Such entities increasingly become the basic building blocks in a component-based software assembly process, highly resembling the hardware design and manufacturing process. The need for configurable dialogue components has been identified in [8], as a general capability of interactive software to visualize some important implementation parameters, through which the flexible fine-tuning of interactive behaviours may be performed at runtime.

However, the analysis in [8] is based on theoretical grounds, and mainly identifies requirements, without proposing specific approaches to achieving this type of desirable functional behaviour. For instance, the proposed distinction among "scalar", "structured" and "higher-order" objects does not map to any interface engineering practice. Moreover, the definition of adaptation policies as "changes" at different levels neither provides any concrete architectural model, nor reveals any useful implementation patterns. The results of such theoretical studies are good for understanding the various dynamics involved in adaptive interaction; however, they do not provide added-value information for engineering adaptive interaction.

It can be concluded from the above that the incorporation of adaptation capabilities into interactive software is far from trivial and cannot be attained through the existing, traditional approaches to software architecture. Therefore, there is a genuine requirement for the definition of a new software architecture that can accommodate the adaptation-oriented requirements of unified user interfaces. Such an architectural framework is described in Sect. 4.2.

3.3 Multi-modal interfaces and abstract interaction objects

Work on multi-modal interaction has emphasized the identification of models, architectures and tools that enable the interface to dynamically cater for the presence of multiple I/O modalities in enabling the user to perform the same task. The relationship between multi-modal interfaces and universally accessible interactions has been identified quite early, in particular with the introduction of meta-widgets [4], as a method to separate the logical aspects of interaction objects from the physical form. This would enable an interface to employ, during interaction, alternative I/O modalities, so that accessibility at the lexical level of interaction can be enabled. Similar concepts, emphasising the need for abstract objects, have been expressed in [20], where a possible architectural profile of a toolkit supporting abstract objects is proposed. The notion of virtual objects [29], supporting open and extensible policies for mapping to physical I/O modalities with fine-grain programming support, has been proposed together with full tool support, in the context of the HOMER UIMS [31]; this UIMS introduced the concept of dual interfaces, in which a single interface is built by utilising, within the same software implementation, multiple interaction technologies (Windows, Xt/Xaw and Comonkit [30]), taking advantage of control sharing, abstraction and toolkit integration mechanisms.

Although the work on multi-modal interfaces has been a serious advancement towards universally accessible interactions, it has been primarily targeted to the lexical level of interaction and the issue of accessibility. When it comes to user- and usage-context oriented adaptation, those approaches do not offer any particular architectural and engineering strategy. For instance, there are clearly no answers to critical questions such as where the alternative implemented dialogue components should reside, at which point dynamic activation/deactivation is controlled, where decision making is performed, where user-oriented information resides and in what form, where interface design information is stored and in what form, etc.

3.4 Alternative access systems

The typical technical approach of alternative access systems is to "intervene" at the level of the particular interactive application environment (e.g., MS Windows, or the X Windowing System) in which inaccessible applications are deployed, and produce appropriate software and hardware technology so as to make that environment alternatively accessible (i.e., environment-level adaptation). In theory, potentially all applications running through that interactive environment can be made accessible. The typical architecture of those systems is indicated in Fig. 1. Such systems usually rely on well-documented and operationally reliable software infrastructures, supporting the effective and efficient extraction of dialogue primitives during user-computer interaction. Such dynamically extracted dialogue primitives are stored in a dynamically maintained, so-called off-screen model (OSM), which is used to reproduce the dialogue primitives at runtime, in alternative input/output forms, directly supporting user access, by employing alternative accessible interaction techniques and devices (e.g., scanning keyboards, cursor routing, mouse emulators, etc.). Examples of such software infrastructures are the Active Accessibility technology by Microsoft, the Java Accessibility technology by JavaSoft, and the GNOME Accessibility Framework by the GNOME Foundation.

Fig. 1 The typical architecture of alternative access systems

The main disadvantage of such automatic interface reproduction methods is the limited interface quality accomplished. This is due to the fact that such an automatic reproduction is merely based on the heuristic transformation of the inaccessible interface design. However, to achieve a high quality of interaction, it is necessary to drive explicit interface design, implementation and evaluation processes for the particular applications, specifically addressing the needs of the target user population (with disability). Effectively, less than optimal interaction can thus be accomplished, while in some cases there are dialogue artefacts that can be neither filtered nor reproduced (e.g., direct manipulation rubber-banding dialogues, multimedia interfaces, visualisations, etc.). For instance, in the context of non-visual interfaces, the need for non-visual user interfaces to be more than automatically generated adaptations of visual dialogues has been identified in [29]. Previous work in the domain of alternative access systems included early research systems produced by the GUIB Project [17], auditory GUIs [35], and the Mercator Project [27], as well as commercial systems like ASAW (Automatic Screen Access for Windows), by MicroTalk, and IN CUBE Enhanced Access (relying upon speech input and other commercial screen readers), by Command Corp, Inc.

3.5 Model-based interfaces

Model-based interface development tools are, in principle, a very promising category of interface tools for universally accessible interactions, since they potentially incorporate computable user- and design-models, even though in most previous model-based tools, such as UIDE [12] and Humanoid [42], the emphasis was on application and dialogue modeling rather than user- and usage-context modeling. Model-based tools, such as, for example, Mobi-D [28], have tried to incorporate methods for mapping user models to dialogue models, by enabling developers to describe user- and application-model instances, while letting the tool (based on heuristic mapping rules) produce a working interface version. However, the capabilities for universally accessible dialogue design and implementation are rather limited: on the one hand, the capability for designing and implementing new artefacts is restricted in comparison to commercial graphic construction tools (such as Visual Basic), while, on the other hand, all available methods are bound to GUIs. From the run-time point of view, model-based tools do not incorporate the capability to produce and assemble alternative interface versions "on-the-fly", based on model instances supplied as an input. In principle, model-based tools provide the means for specifying the logic of model mapping; i.e., how from one model domain (such as a user model) one can map to another model domain (such as the dialogue model).

However, the role of user modeling as a run-time decision factor that can directly affect the dialogue style selection, even for the same dialogue model context (a run-time variation of the dialogue model, i.e., polymorphism), is clearly the most important missing ingredient in model-based approaches. Furthermore, from the software-interface engineering point of view, model-based tools do not propose a general-purpose development strategy (one that can be widely applied with commonly used programming languages). Instead, they appear as stand-alone, non-interoperable research tools. As a consequence, the main disadvantage of model-based tools, from a software engineering viewpoint, is their monolithic nature. It is argued that the emphasis of future work should be shifted towards model-based development allowing the open employment of diverse tools. Each such tool should serve a distinctive purpose in the interface development process, and it should interoperate on the basis of a common architectural structure, specific development roles, and well-defined protocols for information exchange.

4 Unified user interface development

4.1 The strategy

A unified user interface consists of run-time components, each with a distinctive role in performing at run-time an interface assembly process, by selecting the most appropriate dialogue patterns from the available implemented design space (i.e., the organised collection of all dialogue artefacts produced during the design phase). A unified user interface does not constitute a single software system, but becomes a distributed architecture consisting of independent inter-communicating components, possibly implemented with different software methods/tools and residing at different physical locations. These components cooperate together to perform adaptation according to the individual end-user attributes and the particular usage context. At run-time, the overall adapted interface behaviour is realised by two complementary classes of system-initiated actions:

a. adaptations driven from initial user- and context-information, acquired without performing interaction monitoring analysis (i.e., what is "known" before starting to observe the user or the usage context); and

b. adaptations decided on the basis of information inferred or extracted by performing interaction monitoring analysis (i.e., what is "learnt" by observing the user or the usage context).

The former behaviour is referred to as adaptability (i.e., initial automatic adaptation, performed before initiation of interaction), reflecting the capability of the interface to proactively and automatically tailor itself to the attributes of each individual end user. The latter behaviour is referred to as adaptivity (i.e., continuous automatic adaptation), and characterizes the capability of the interface to cope with the dynamically changing/evolving characteristics of users and usage contexts. Adaptability is crucial to ensure accessibility, since it is essential to provide, before the initiation of interaction, a fully accessible interface instance to each individual end user. Adaptivity can be applied only on accessible running interface instances (i.e., ones with which the user is capable of performing interaction), since interaction monitoring is required for the identification of changing or emerging decision parameters that may drive dynamic interface enhancements.

The complementary roles of adaptability and adaptivity are depicted in Fig. 2, while the key differences between these two adaptation methods are illustrated in Table 1. This fundamental distinction is made due to the different run-time control requirements between those two key classes of adaptation behaviours, requiring different software engineering policies.

Fig. 2 The complementary roles of adaptability and adaptivity

Table 1 Key differences between adaptability and adaptivity in the context of unified user interfaces

4.2 The unified user interface software architecture

In this section, the detailed run-time architecture for unified user interfaces will be discussed, in compliance with the definitions of architecture provided by the Object Management Group (OMG) [25] and [19], according to which an architecture should supply an organisation of components, a description of functional roles, detailed communication protocols or appropriate application programming interfaces (APIs), and key component implementation issues. Following the presentation of the architecture within the next section, concrete examples from the AVANTI browser will be illustrated, by describing: (a) how their implementation is split into the different architectural components; (b) what type of implementation approach has been taken in each different component; and (c) how the various components communicate (i.e., control flow) to accomplish the appropriate adaptation behaviour.

Firstly, an outline of the adopted architectural components will provide information regarding: (a) the functional role; (b) the run-time behaviour; (c) the encapsulated content; and (d) the implementation method. The components of the unified user interface architecture are (see Fig. 3; a minimal interface sketch follows the list):

– The Dialogue Patterns Component (DPC);
– The Decision Making Component (DMC);
– The User Information Server (UIS);
– The Context Parameters Server (CPS).

Fig. 3 The components of the unified user interface architecture
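As a rough orientation before the detailed discussion, the following Java sketch outlines one plausible shape for the four components. The names and signatures are illustrative assumptions; the architecture itself prescribes functional roles and communication protocols, not a concrete API:

    import java.util.Map;

    // Hosts the implemented dialogue components and applies adaptation decisions.
    interface DialoguePatternsComponent {
        void activate(String componentId);
        void cancel(String componentId);
    }

    // Matches user- and context-attribute values to dialogue artefacts and emits
    // activation/cancellation decisions towards the DPC.
    interface DecisionMakingComponent {
        void attributesChanged(Map<String, Object> attributes);
    }

    // Supplies off-line profile attributes, and infers dynamic user attributes
    // from interaction monitoring events.
    interface UserInformationServer {
        Map<String, Object> userProfile(String userId);
        void monitoringEvent(String subTask, String event, long timestamp);
    }

    // Supplies invariant equipment entries and notifies about variant parameters.
    interface ContextParametersServer {
        Map<String, Object> contextProfile();
    }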


Page 9: Unified user interface development: the software engineering of … · 2011. 3. 14. · A third contribution concerns the specific architec-tural proposition that is part of the

4.2.1 The UIS

4.2.1.1 The functional role This component supplies user attribute values: (i) known off-line, without performing interaction monitoring analysis (e.g., motor/sensory abilities, age, nationality, etc.); and (ii) detected on-line, from real-time interaction monitoring analysis (e.g., fatigue, loss of orientation, inability to perform the task, interaction preferences, etc.).

4.2.1.2 The run-time behaviour At run-time, this component plays a two-fold role: (i) it constitutes a server that maintains and provides information regarding individual user profiles; and (ii) it encompasses user representation schemes, knowledge processing components and design information, to dynamically detect user properties or characteristics.

4.2.1.3 The encapsulated content This component may need to employ alternative ways of representing user-oriented information. A repository of user profiles serves as a central database of individual user information (i.e., the registry). In Fig. 4, the notion of a profile structure and a profile instance, reflecting a typical list of typed attributes, is shown; this model, though quite simple, has proved in real practice to be very powerful and flexible (it can be stored in a database, thus turning the profile manager into a remotely accessed database). Additionally, more sophisticated user representation and modeling methods can also be employed, including support for stereotypes of particular user categories. In case dynamic user attribute detection is to be supported, the content may include dynamically collected interaction monitoring information, design information and knowledge processing components, as discussed in the implementation techniques. Systems such as BGP-MS [21], PROTUM [44], or USE-IT [2] encompass similar techniques for such intelligent processing.

Fig. 4 The notion of a profile structure and a profile instance
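A minimal Java sketch of such a profile as a flat list of typed attribute/value pairs follows. The attribute names and values are hypothetical examples in the spirit of Fig. 4, not the actual AVANTI profile structure:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ProfileSketch {
        // A profile instance: a flat list of typed attribute/value pairs, simple
        // enough to live as a record in a remotely accessed profile database.
        static class UserProfile {
            private final Map<String, Object> attributes = new LinkedHashMap<>();
            void set(String name, Object value) { attributes.put(name, value); }
            Object get(String name) { return attributes.get(name); }
        }

        public static void main(String[] args) {
            UserProfile p = new UserProfile();   // hypothetical attribute names/values
            p.set("age", 34);
            p.set("nationality", "Greek");
            p.set("vision", "blind");
            p.set("webExpertise", "novice");
            System.out.println(p.get("vision"));
        }
    }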

4.2.1.4 Implementation From a knowledge representation point of view, static or pre-existing user knowledge may be encoded in any appropriate form, depending on the type of information the user information server should feed to the decision-making process. Moreover, additional knowledge-based components may be employed for processing retrieved user profiles, drawing assumptions about the user, or updating the original user profiles. In Fig. 5, the internal architecture of the UIS employed in the AVANTI browser is presented. It should be noted that the first version of the AVANTI browser produced in the context of the AVANTI Project employed BGP-MS [22] in the role of the UIS. The profile manager has been implemented as a database of profiles. The two other sub-systems (i.e., the monitoring manager, and the modeling and inference components) are needed only in case dynamic user attribute detection is required.

Fig. 5 The internal architecture of the UIS employed in the AVANTI browser

The interaction monitoring history has been implemented as a time-stamped list of monitoring events (the structure of monitoring events is described in the analysis of communication semantics), annotated with simple dialogue design context information (i.e., just the sub-task name). In the user models, all the types of dynamically detected user attributes have been identified (e.g., inability to perform a task, loss of orientation; those were actually the two dynamically detectable attributes required by the design of the AVANTI browser). Each such attribute is associated with its corresponding behavioural action patterns. In the specific case, the representation of the behavioural patterns has been implemented together with the pattern-matching component, by means of state automata. For instance, one heuristic pattern to detect loss of orientation has been defined as "the user moves the cursor inside the Web-page display area, without selecting a link, for more than N seconds". The state automaton starts recording mouse moves in the page area, appropriately increasing a weight variable and a probability value based on incoming monitored mouse moves, and finally triggering detection when no intermediate activity is successfully performed by the user. This worked fine from an implementation point of view. However, all such heuristic assumptions had to be extensively verified with real users, so as to assert the relationship between the observable user behaviour and the particular inferred user attributes. This is a common issue in all adaptive systems that employ heuristics for detecting user attributes at runtime, practically meaning that the validity of the "assumptions inferred" is dependent on the appropriateness of the specific user action patterns chosen.
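As an illustration of how such a behavioural pattern can be encoded as a state automaton, the following simplified Java sketch reconstructs the loss-of-orientation heuristic. The names are assumptions, and the actual AVANTI automaton additionally maintains the weight and probability values mentioned above:

    // Simplified sketch of the "loss of orientation" detector: mouse moves inside
    // the page display area, with no link selection, for more than N seconds.
    public class OrientationLossDetector {
        private enum State { IDLE, TRACKING }
        private State state = State.IDLE;
        private long trackingSince;
        private final long thresholdMillis;   // the "N seconds" of the heuristic

        public OrientationLossDetector(long thresholdMillis) {
            this.thresholdMillis = thresholdMillis;
        }

        // Monitoring event: mouse moved inside the page display area at time 'now'.
        // Returns true when the pattern completes, i.e., loss of orientation is inferred.
        public boolean onMouseMoveInPageArea(long now) {
            if (state == State.IDLE) { state = State.TRACKING; trackingSince = now; }
            return now - trackingSince > thresholdMillis;
        }

        // Monitoring event: a link was selected; successful activity resets the pattern.
        public void onLinkSelected() { state = State.IDLE; }
    }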

4.2.2 The CPS

4.2.2.1 The functional role The purpose of this component is to supply context attribute values (machine and environment) of two types: (i) (potentially) invariant, meaning unlikely to change during interaction (e.g., peripheral equipment); and (ii) variant, dynamically changing during interaction (e.g., due to environment noise, or the failure of particular equipment, etc.). This component is not intended to support device independence, but to provide device awareness. Its purpose is to enable the DMC to select those interaction patterns which, apart from fitting the particular end-user attributes, are also appropriate for the type of equipment available on the end-user machine.

4.2.2.2 The run-time behaviour The usage-context attribute values are communicated to the DMC before the initiation of interaction. Additionally, during interaction, some dynamically changing usage-context parameters may also be fed to the DMC for decisions regarding adaptive behaviour. For instance, let us assume that the initial decision for selecting feedback leads to the use of audio effects. Then, the dynamic detection of an increase in environmental noise may result in a run-time decision to switch to visual feedback (the underlying assumption being that such a decision does not conflict with other constraints).

4.2.2.3 The encapsulated content This component encompasses a listing of the various invariant properties and equipment of the target machine (e.g., hand-held binary switches, a speech synthesiser for English, a high-resolution display (16-bit mode, 1024×768), a noisy environment, etc.). In this context, the more information regarding the characteristics of the target environment and machine is encapsulated, especially concerning I/O devices, the better adaptation can be achieved (information initially appearing redundant is likely to be used in future adaptation-oriented extensions).

4.2.2.4 The implementation The registry of environment properties and available equipment can easily be implemented as a profile manager in the form of a database. Such information will be communicated to the DMC as attribute/value pairs. However, if usage-context information is to be dynamically collected, such as environment noise or reduction of network bandwidth, the installation of proper hardware sensors or software monitors becomes mandatory.
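A minimal Java sketch of such a registry, under assumed names, combines a database-like store of invariant equipment entries with sensor-driven notifications for variant parameters:

    import java.util.*;

    public class ContextRegistry {
        // Listener for context changes; in this architecture, typically the DMC.
        public interface Listener { void contextChanged(String attribute, Object value); }

        private final Map<String, Object> attributes = new HashMap<>();
        private final List<Listener> listeners = new ArrayList<>();

        public void register(Listener l) { listeners.add(l); }

        // Invariant equipment entries, set once at start-up (e.g., "display", "switches").
        public void setInvariant(String attribute, Object value) { attributes.put(attribute, value); }

        // Variant parameters pushed by hardware sensors or software monitors at run-time.
        public void sensorUpdate(String attribute, Object value) {
            attributes.put(attribute, value);
            listeners.forEach(l -> l.contextChanged(attribute, value));
        }

        // Snapshot communicated to the DMC as attribute/value pairs before interaction.
        public Map<String, Object> snapshot() { return Map.copyOf(attributes); }
    }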

4.2.3 The DMC

4.2.3.1 The functional role The role of this component is to decide, at runtime, the necessary adaptability and adaptivity actions, and to subsequently communicate those to the DPC (the latter being responsible for applying adaptation-oriented decisions).

4.2.3.2 The run-time behaviour To decide adaptation, this component performs a kind of rule-based knowledge processing, so as to match end-user- and usage-context attribute values to the corresponding dialogue artefacts, for all the various dialogue contexts.

4.2.3.3 The encapsulated content This module encompasses the logic for deciding the necessary adaptation actions, on the basis of the user- and context-attribute values received from the UIS and the CPS, respectively.


Such attribute values will be supplied to the DMC prior to the initiation of interaction within different dialogue contexts (i.e., initial values, resulting in initial interface adaptation), as well as during interaction (i.e., changes in particular values, or detection of new values, resulting in dynamic interface adaptations).

In the proposed approach, the encapsulated adaptation logic should reflect decisions pre-defined during the design stage. In other words, the inference mechanisms employ well-defined decision patterns that have been validated during the design phase of the various alternative dialogue artefacts. In practice, this approach leads to a rule-based implementation, in which the embedded knowledge reflects adaptation rules that have already been constructed and documented as part of the design stage. This decision-making policy is motivated by the assumption that if a human designer cannot decide upon adaptation for a dialogue context, given a particular end user and usage context, then a valid adaptation decision cannot be taken by a knowledge-based system at runtime. Later in this paper, while discussing some implementation details of the AVANTI browser, specific excerpts from the rule base of the decision engine will be discussed.
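Purely as an illustration of this rule-based flavour, a decision step can be sketched in Java as follows. The names and the string-encoded decisions are assumptions for illustration, not the AVANTI rule language discussed later:

    import java.util.*;
    import java.util.function.Predicate;

    public class DecisionEngineSketch {
        // One design-time adaptation rule: a condition over attribute values, plus
        // the cancellation/activation decisions to emit when it matches.
        record Rule(Predicate<Map<String, Object>> condition,
                    List<String> cancel, List<String> activate) {}

        private final List<Rule> rules = new ArrayList<>();
        void add(Rule r) { rules.add(r); }

        // Match the supplied user/context attribute values against the rule base
        // and collect the decisions to be communicated to the DPC.
        List<String> decide(Map<String, Object> attributes) {
            List<String> decisions = new ArrayList<>();
            for (Rule r : rules) {
                if (r.condition().test(attributes)) {
                    r.cancel().forEach(id -> decisions.add("cancel " + id));
                    r.activate().forEach(id -> decisions.add("activate " + id));
                }
            }
            return decisions;
        }

        public static void main(String[] args) {
            DecisionEngineSketch dmc = new DecisionEngineSketch();
            // Hypothetical rule: for a blind user, substitute the visual page
            // dialogue with its non-visual alternative.
            dmc.add(new Rule(attrs -> "blind".equals(attrs.get("vision")),
                    List.of("visualPageDialogue"), List.of("nonVisualPageDialogue")));
            System.out.println(dmc.decide(Map.of("vision", "blind")));
        }
    }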

4.2.3.4 Implementation The first remark regarding the implementation of decision making concerns the apparent "awareness" regarding: (a) the various alternative dialogue artefacts (how they are named, e.g., virtual keyboard, and for which dialogue context they have been designed, e.g., http address text field); and (b) the user- and usage-context attribute names, and their respective value domains (e.g., attribute "age", being an integer in the range 5...110).

The second issue concerns the input to the decision process, being the individual user- and usage-context attribute values. Those are received at run-time from both the UIS and the CPS, either by request (i.e., the DMC takes the initiative to request the end-user and usage-context profiles at start-up to draw adaptability decisions), or by notification (i.e., when the UIS draws assumptions regarding dynamic user attributes, or when the CPS identifies dynamic context attributes).

The third issue concerns the format and structure of knowledge representation. In all developments that we have carried out, it has been proven that a rule-based logic implementation is practically adequate. Moreover, all interface designers engaged in the design process emphasised that this type of knowledge representation approach is far closer to their own way of rule-based thinking in deciding adaptation. This remark has led to excluding, at a very early stage, other possible approaches, such as heuristic pattern matching, weighting-factor matrices, or probabilistic decision networks.

The final issue concerns the representation of the outcomes of the decision process in a form suitable for being communicated to, and easily interpreted by, the DPC. In this context, it has been practically proven that two categories of dialogue control actions suffice to communicate adaptation decisions: (i) activation of specific dialogue components; and (ii) cancellation of previously activated dialogue components.

These two categories of adaptation actions provide the expressive power necessary for communicating the dialogue component manipulation requirements that realise both adaptability and adaptivity (see Table 2; in Table 3, the decision-making logic is defined). Substitution is modelled by a message containing a series of cancellation actions (i.e., the dialogue components to be substituted), followed by the necessary number of activation actions (i.e., the dialogue components to activate in place of the cancelled ones); the transmission of those commands in a single message therefore implements a substitution action. The need to package, in one message, information regarding the cancelled component together with the components which take its place emerges when the implemented interface requires knowledge of all (or some) of the newly created components during interaction. For instance, if the new components include a container (e.g., a window object) with various embedded objects, and if, upon the creation of the container, information on the number and type of the particular contained objects is needed, it is necessary to ensure that all the relevant information (i.e., all engaged components) is received as a single message. It should be noted that, since each activation/cancellation command always carries its target UI component identification (see Table 4), it is possible to engage in substitution requests components that are not necessarily part of the same physical dialogue artefact. Also, the decision to apply substitution is the responsibility of the DMC.
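The following C++ fragment is a hedged illustration, not the paper's actual wire format, of how such a decision message could be encoded, with the cancellations preceding the activations so that a substitution is delivered atomically to the DPC; the component names are invented for the example.

```cpp
#include <string>
#include <vector>

// One adaptation action: cancel or activate a named dialogue component.
struct AdaptationAction {
    enum Kind { Cancel, Activate } kind;
    std::string component;   // target UI component identification
};

// All decisions of an evaluation round are posted as one message.
struct DecisionMessage {
    std::vector<AdaptationAction> actions;
};

// Substitution: cancellations first, then the components replacing them.
DecisionMessage substitutionExample() {
    return DecisionMessage{{
        { AdaptationAction::Cancel,   "desktop toolbar" },
        { AdaptationAction::Activate, "kiosk toolbar" },
        { AdaptationAction::Activate, "kiosk exit button" }
    }};
}
```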

Table 2 The user interface component manipulation requirements (left), for adaptability and adaptivity, and their expression via cancellation/activation adaptation actions


One issue regarding the expressive power of the activation and cancellation decision categories concerns the way dynamic interface updates (i.e., changing style or appearance, without closing or opening interface objects) can be effectively addressed. The answer to this question is related to the specific connotation attributed to the notion of a dialogue component. A dialogue component may not only implement a physical dialogue context, such as a window and its embedded objects, but may concern the activation of dialogue control policies, or be realised as a particular sequence of interface manipulation actions. In this sense, the interface updates are to be collected in an appropriate dialogue implementation component (e.g., a program function, an object class, a library module), to be subsequently activated (i.e., called) when a corresponding activation message is received. This is the specific approach taken in the AVANTI browser, which, from a software engineering point of view, enabled a better organisation of the implementation modules around common design roles.
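To make this connotation concrete, here is a minimal C++ sketch, with hypothetical names, in which a dialogue component may be a window as well as a plain sequence of interface-update actions; cancellation is optional for components of a temporal nature.

```cpp
// Hypothetical dialogue component interface: activation may instantiate a
// physical dialogue, or merely run a sequence of interface updates.
class DialogueComponent {
public:
    virtual ~DialogueComponent() {}
    virtual void activate() = 0;   // called upon an activation message
    virtual void cancel() {}       // optional: temporal components omit it
};

// A component realised as an action sequence rather than a window.
class FontEnlarger : public DialogueComponent {
public:
    void activate() override { /* e.g., enlarge fonts of the active view */ }
};
```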

4.2.4 The DPC

4.2.4.1 The functional role This component is responsible for supplying the software implementation of all the dialogue artefacts that have been identified in the design process. Such implemented components may vary from dialogue artefacts that are common across different user- and usage-context attribute values (i.e., no adaptation needed), to dialogue artefacts that map to individual attribute values (i.e., alternative designs have been necessitated for adapted interaction). Additionally, as has been previously mentioned, apart from implementing physical context, various components may implement dialogue-sequencing control, perform interface manipulation actions, maintain shared dialogue state logic, or apply interaction monitoring.

4.2.4.2 The run-time behaviour The DPC should be capable of applying, at run-time, the activation or cancellation decisions originating from the DMC. Additionally, interaction monitoring components may need to be dynamically installed on, or uninstalled from, particular physical dialogue components. This behaviour serves the run-time interaction monitoring control requests from the UIS, so as to provide continuous interaction monitoring information back to the UIS for further intelligent processing.

4.2.4.3 The encapsulated content The DPC either embeds the software implementation of the various dialogue components, or is aware of where those components physically reside, by employing dynamic query, retrieval and activation methods. The former is the typical method that can be used if the software implementation of the components is provided locally, by means of software modules, libraries or resident installed components. Usually, most of the implementation is then carried out in a single programming language. The latter approach reflects the scenario in which distinct components are implemented on top of component-ware technologies, usually residing in local/remote component repositories (also called registries or directories), enabling re-use with dynamic deployment.

Table 3 Excerpts from the decision-making logic specification, reflecting an "if...then...else" rule format. The rules are organised according to the particular dialogue context (e.g. user task/display view/feedback). At runtime, individual user attributes are made available through the user. prefix, while usage-context attributes through the context. prefix; the decision on activation/cancellation of the appropriate dialogue component(s) is defined by activate or cancel rules, while re-evaluation with a new dialogue context is triggered by a dialogue statement. All decisions made within an evaluation round are posted as a single message

In the development of the AVANTI browser, a combination of these two approaches has been employed, by implementing most of the common dialogue components in a single language (actually in C++, employing all the necessary toolkit libraries), while implementing some of the alternative dialogue artefacts as independent ActiveX components that were located and employed on-the-fly. The experience from the software development of the AVANTI browser has proved that: (a) the single-language paradigm makes it far easier to perform quick implementation and testing of interface components; and (b) the component-based approach largely promotes binary-format re-use of implemented dialogue components, while offering far better support for dynamic interface assembly, which is the central engineering concept of unified user interfaces (this issue will be elaborated upon in the discussion section of the paper).

Table 4 Communicated messages between the UIS and the DMC

4.2.4.4 The implementation The micro-architecture of the DPC internally employed in the AVANTI browser, as outlined in Fig. 6, emphasises an internal organisation that enables extensibility and evolution through the addition of new dialogue components. Additionally, it reflects the key role of the DPC in applying adaptation decisions. The internal components are:

– The activation dispatcher, which "locates" the source of implementation of a component (or simply uses its API, if it is a locally used library), to activate it. In this sense, activation may imply a typical instantiation in OOP terms, the calling of particular service functions, or the activation of a remotely located object. After a component is activated, if cancellation may later be applied to it, it is further registered in a local registry of activated components. In this registry, the indexing parameters used are the particular dialogue context (e.g., a sub-task, such as "http address field") and the artefact design descriptor (i.e., a unique descriptive name provided during the design phase, such as "virtual keyboard"); a sketch of this registry is given after this list. For some categories of components, cancellation may not be defined during the design process, meaning there is no reason to register those at run-time for possible future cancellation (e.g., components with a temporal nature, which only perform some interface update activities).

– The cancellation dispatcher, which locates a component based on its indexing parameters and "calls" for cancellation. This may imply a typical destruction in OOP terms, calling internally particular service functions that may typically perform the unobtrusive removal of the physical view of the cancelled component, or the release of a remote object instance. After cancellation is performed, the component instance is removed from the local registry.

– The monitoring manager, which plays a two-fold role: (a) it applies monitoring control requests originating from the UIS by, firstly, locating the corresponding dialogue components and, secondly, requesting the installation (or uninstallation) of the particular monitoring policy (this requires implementation additions in dialogue components, for performing interaction monitoring and for activating or deactivating the interaction monitoring behaviour); and (b) it receives interaction monitoring notifications from dialogue components and posts those to the UIS.

– The communication manager, which is responsible for dispatching incoming communication (activation, cancellation and monitoring control) and posting outgoing communication (monitoring data, and initial adaptation requests). One might observe that there is also an explicit link between the dialogue components and the communication manager. This reflects the initiation of interaction, in which the dialogue control logic (residing within dialogue components) iteratively requests the application of decision making (from the DMC). Such requests need to be posted for all cases involving dialogue component alternatives for which an adapted selection has to be performed.

– The dialogue components, which typically encompass the real implementation of the physical dialogues, the dialogue control logic, and an interaction monitoring method. In practice, it is hard to accomplish an isolated implementation of the dialogue artefacts as independent "black boxes" that can be combined and assembled on-the-fly by independent controlling software. In most designs, it is common that physical dialogue artefacts are contained inside other physical artefacts. In this case, if there are alternative versions of the embedded artefacts, it turns out that, to make containers fully orthogonal and independent with respect to the contained, one has to support intensive parameterisation and pay a "heavier" implementation overhead. However, the gains are that the implementation of contained artefacts can be independently re-used across different applications, while in the more "monolithic" approach, re-use requires deployment of the container code (and, recursively, of its container too, if it is contained as well). A later section will discuss this issue in further detail.
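The following C++ fragment sketches the activated-components registry mentioned above, indexed by the dialogue context and the artefact design descriptor; all type and function names are hypothetical, not the AVANTI declarations.

```cpp
#include <map>
#include <memory>
#include <string>
#include <utility>

// Minimal stand-in for the dialogue component interface sketched earlier.
struct DialogueComponent {
    virtual ~DialogueComponent() {}
    virtual void cancel() {}   // e.g., unobtrusively remove the physical view
};

// (dialogue context, artefact design descriptor), e.g.
// ("http address field", "virtual keyboard").
using ComponentKey = std::pair<std::string, std::string>;

class ActivationRegistry {
public:
    // Called by the activation dispatcher after activating a component
    // for which cancellation is defined.
    void registerActive(const ComponentKey& key,
                        std::unique_ptr<DialogueComponent> c) {
        active_[key] = std::move(c);
    }
    // Called by the cancellation dispatcher: locate, cancel, unregister.
    bool cancel(const ComponentKey& key) {
        auto it = active_.find(key);
        if (it == active_.end()) return false;
        it->second->cancel();
        active_.erase(it);
        return true;
    }
private:
    std::map<ComponentKey, std::unique_ptr<DialogueComponent>> active_;
};
```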

4.2.5 Adaptability and adaptivity cycles

The completion of an adaptation cycle, be it adaptability or adaptivity, is realised in a number of distributed processing stages performed by the various components of the unified architecture. During these stages, the components communicate with each other, requesting or delivering specific pieces of information. Figure 7 outlines the processing steps for performing both the initial adaptability cycle (to be executed only once), as well as the two types of adaptivity cycles (i.e., one starting from "dynamic context attribute values", and another starting from "interaction monitoring control"). Local actions indicated within components (in each of the four columns) are either outgoing messages, shown in bold typeface, or necessary internal processing, illustrated via shaded rectangles.

4.3 Inter-component communication semantics

This section presents the communication protocol among the various components in a form emphasising the rules that govern the exchange of information among the various communicating parties, as opposed to a strict message-syntax description. Hence, the primary focus will be on the semantics of communication, regarding: (a) the type of information communicated; (b) the content it conveys; and (c) the usefulness of the communicated information at the recipient component side.

Fig. 6 The micro-architecture of the DPC internally employed in the AVANTI browser

In the unified software architecture, there are four distinct bi-directional communication channels (outlined in Fig. 8), each engaging a pair of communicating components. For instance, one such channel concerns the communication between the UIS and the DMC. Each channel practically defines two protocol categories, one for each direction of the communication link, e.g., UIS→DMC (i.e., the type of messages sent from the UIS to the DMC) and DMC→UIS (i.e., the type of messages sent from the DMC to the UIS). The description of the protocols for each of the four communication channels follows.

4.3.1 Communication between the UIS and the DMC

In this communication channel, there are two communication rounds: (a) prior to the initiation of interaction, the DMC requests the user profile from the UIS, and the latter replies directly with the corresponding profile as a list of attribute/value pairs; and (b) after the initiation of interaction, each time the UIS detects some dynamic user attribute values (on the basis of interaction monitoring), it communicates those values immediately to the DMC. In Table 4, the syntax of the messages communicated between the UIS and the DMC is defined, and simple examples are provided.

4.3.2 Communication between the UIS and the DPC

The communication between these two components aims to enable the UIS to collect interaction monitoring information, as well as to control the type of monitoring to be performed. The UIS may request monitoring at three different levels: (a) the task (i.e., when it is initiated or completed); (b) the method of interaction objects (i.e., which logical object action has been accomplished, such as "pressing" a "button" object); and (c) the input event (i.e., a specific device event, such as "mouse move" or "key press").
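As an illustrative, hedged rendering of these three levels (the authoritative message syntax is the one defined in Table 5), a monitoring control request might be modelled as follows; the names are invented.

```cpp
#include <string>

// The three monitoring levels the UIS may request from the DPC.
enum class MonitoringLevel {
    Task,           // task initiated / completed
    ObjectMethod,   // logical object action, e.g. "pressing" a "button"
    InputEvent      // device event, e.g. "mouse move", "key press"
};

// A monitoring control request: install or uninstall monitoring of a
// given level on a given target (task name, object class, or device).
struct MonitoringControlRequest {
    MonitoringLevel level;
    std::string     target;
    bool            install;
};
```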

Fig. 7 The processing steps for performing both the initial adaptability cycle, as well as the two types of adaptivity cycles

Fig. 8 Four distinct bi-directional communication channels


In response to monitoring control messages, the DPC will have to: (a) activate or cancel the appropriate interaction monitoring software modules; and (b) continuously export monitoring data, according to the monitoring levels requested, back to the User Information Server (initially, no monitoring modules are activated by the DPC). In Table 5, the syntax of the messages communicated between the UIS and the DPC is defined, and simple examples are provided.

4.3.3 Communication between the DMC and the DPC

As mentioned above, inside the DPC, alternative implemented dialogue artefacts are associated with their respective dialogue contexts (e.g., their sub-tasks). As part of the design process, each dialogue artefact has been assigned an indicative name, unique among the alternatives for the same dialogue context. Each such implemented dialogue artefact, associated with a particular dialogue context, will be referred to as a style. Styles of a given dialogue context can thus be identified by reference to their designated name.

At start-up, before the initiation of interaction, the initial interface assembly process proceeds as follows: the top-level dialogue component requests, for its associated user task, the names of the styles (i.e., implemented dialogue components) which have to be activated, so as to realise the adaptability behaviour. Recursively, each container dialogue component repeats the same process for every embedded component. For each received request, the DMC triggers the adaptability cycle and responds with the proper activation messages. Additionally, after the initiation of interaction, the DMC may communicate dynamic style activation/cancellation messages to the DPC, as a consequence of the dynamic detection of user- or usage-context attribute values (originating from the UIS).
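The recursion just described can be rendered schematically in C++ as follows; requestDecisions() and instantiateStyle() are hypothetical stand-ins for the DMC round-trip and for component activation, respectively.

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Minimal stand-in for a container dialogue component.
struct Container {
    virtual ~Container() {}
    virtual std::vector<std::string> embeddedContexts() const { return {}; }
    void add(std::unique_ptr<Container> c) { children.push_back(std::move(c)); }
    std::vector<std::unique_ptr<Container>> children;
};

// Stub: the real version performs the DMC round-trip for this context.
std::vector<std::string> requestDecisions(const std::string& /*context*/) {
    return {};
}
// Stub: the real version locates and activates the named implementation.
std::unique_ptr<Container> instantiateStyle(const std::string& /*style*/) {
    return std::unique_ptr<Container>(new Container);
}

// Initial interface assembly: ask the DMC which styles to activate for a
// dialogue context, then recurse into every embedded context.
void assemble(Container& parent, const std::string& dialogueContext) {
    for (const std::string& style : requestDecisions(dialogueContext)) {
        std::unique_ptr<Container> child = instantiateStyle(style);
        for (const std::string& sub : child->embeddedContexts())
            assemble(*child, sub);
        parent.add(std::move(child));
    }
}
```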

Table 5 Communicated messages between the UIS and the DPC


Table 6 defines the syntax of the messages communicated between the DMC and the DPC, and shows simple examples.

4.3.4 Communication between the DMC and the CPS

The communication between these two components is very simple. The DMC requests the various context parameter values (i.e., the usage-context profile), and the CPS responds accordingly. During interaction, dynamic updates of certain context property values are communicated to the DMC for further processing (i.e., possibly leading to new inferences). Table 7 defines the classes of messages communicated between the DMC and the CPS, and shows simple examples.

5 Development highlights

In this section, selected scenarios from the development of the AVANTI browser are discussed, focusing on the technical challenges related to the different components of the implemented architecture. As will be shown, in some scenarios the implementation of the dialogue components has been comparatively the most demanding task, while in others the logic for dynamic user-attribute detection required particular attention. The AVANTI browser has been subject to intensive usability evaluation with end users, through which the initial design scenarios have been updated, leading to the final validated unified user interface design. The emphasis of the evaluation process has been on verifying the added value of the adaptation-specific artefacts. The overall AVANTI system design, implementation and evaluation, in the form of a case study in unified user interface development, is discussed in detail in [38].

In Fig. 9, three instances of the AVANTI browser interface are depicted, demonstrating adaptation based on the characteristics of the user and the usage context. Specifically, Fig. 9a presents a simplified instance of the browser interface intended for use by a user "unfamiliar with Web browsing". Note the "basic" web-access user interface with which the user is presented, as well as the fact that links are presented as buttons, increasing their affordance (at least in terms of functionality) for users familiar with windowing applications in general. The second instance, Fig. 9b, presents an interface instance to be used in the context of a "kiosk installation". Note that the typical windowing paradigm is abandoned and replaced by a simpler, content-oriented interface (even scrollbars, for example, have been replaced by "scroll buttons", which perform the same function as scrollbars, but have different presentation and behavioural attributes; e.g., they "hide" themselves if they are not selectable). Furthermore, "usage-context sensitive" buttons in the interface's "toolbar" are supported (i.e., adding the "exit" button in the "kiosk" usage context, since the typical "File" menu providing the "exit" option in desktop mode is not available in "kiosk" mode). Finally, in the third instance, Fig. 9c, the interface has been adapted for an "experienced Web user". Note the additional functionality that is available to the user (e.g., a pane where the user can access an overview of the document itself, or of the links contained therein, and an edit field for entering the URLs of local or remote HTML documents).

Table 6 Communicated messages between the DMC and the DPC

The scenario of Fig. 9 introduced challenges mainly for the organisation of the dialogue components. More specifically, in Fig. 10, the organisation structure of the dialogue components is outlined. The management of alternative components has been handled in a way directly reflecting the design logic, through appropriate OOP abstraction mechanisms (the AVANTI browser has been implemented in C++). For instance (see Fig. 10), the toolbar has been implemented as a container component separating those contained constituents that could be subject to dynamic presence (e.g., the "expert commands" component, which is to be conditionally activated, has been purposely defined as a separate component, to enable independent activation control). Additionally, the toolbar itself has been defined as an abstract container, meaning it can be physically realised with alternative container styles (e.g. a managed window, a toolbar, a simple rectangular area, an auditory container). Following a similar approach, the various contained constituents (like the "expert commands"), when supporting alternative realisations (e.g., "kiosk" or "desktop" versions, each further supplied with "GUI" or "non-visual" versions), were also structured as abstract classes, enabling all the various alternative versions to be implemented as concrete instantiations of the abstract class.

Following such an implementation approach, container components manipulate constituents through abstract object instances that are constructed during runtime as follows: assume a contained-constituent abstract class A with alternative implementations A1,...,An; then, an Ai instance will be created following the adaptation decision received from the DMC. This distinction between the abstract component API and its internal implementation has proved to be a very simple, yet powerful, method for handling the component organization needed in the AVANTI browser.
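A hedged C++ sketch of this pattern, with invented class names, could look as follows: the container manipulates only the abstract API, while the concrete style is selected from the DMC's decision.

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Abstract API through which the container manipulates the constituent.
class Toolbar {
public:
    virtual ~Toolbar() {}
    virtual void realise() = 0;   // build the physical artefact
};

// Alternative concrete instantiations (names invented for the example).
class DesktopToolbar  : public Toolbar { public: void realise() override {} };
class KioskToolbar    : public Toolbar { public: void realise() override {} };
class AuditoryToolbar : public Toolbar { public: void realise() override {} };

// Instantiate the alternative named by the DMC's adaptation decision.
std::unique_ptr<Toolbar> makeToolbar(const std::string& decision) {
    if (decision == "desktop")  return std::unique_ptr<Toolbar>(new DesktopToolbar);
    if (decision == "kiosk")    return std::unique_ptr<Toolbar>(new KioskToolbar);
    if (decision == "auditory") return std::unique_ptr<Toolbar>(new AuditoryToolbar);
    throw std::runtime_error("unknown toolbar style: " + decision);
}
```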

In Fig. 11, the activation of dialogue components that provide switch-based scanning [34] for motor-impaired user access is depicted. The implementation of those dialogue techniques did not follow the component abstraction pattern described before. From the initial design stages, it became evident that the scanning dialogue patterns were needed for all GUI dialogue components, and thereafter, for every constituent GUI object of those components. Hence, from a design point of view, the diversification originally emerges at the level of interaction objects, rather than at the level of higher-level dialogue components. Based on this remark, the pattern has been applied at a lower, but more generic, level, as indicated in the lower part of Fig. 10. The GUI object library has been supplied with two alternative implementations: (a) the original, intact Windows implementation; and (b) an augmented version, supporting switch-based dialogue control (such as hierarchical scanning, a virtual keyboard, and automatic window toolbars, as shown in Fig. 11).

Table 7 Communicated messages between the DMC and the CPS

The previous abstraction technique worked well for embedding scanning in GUI dialogue objects. Then, we tried to re-apply the same technique for the HAWK non-visual toolkit, a software library specifically designed and implemented to enable the development of non-visual interactive applications [33]. It turned out that further abstraction, encompassing the augmented GUI toolkit and the HAWK toolkit within a single abstract toolkit, could not work in practice for a demanding application such as a Web browser. The reason was twofold: (a) the programmers of the GUI dialogues required fine-grain control over all visual attributes and methods, thus putting a GUI bias on the abstract toolkit; and (b) the programmers of the non-visual dialogues required similarly fine-grain control over all non-visual interaction techniques, inherently putting a "HAWK-specific" bias on the abstract toolkit. As a result, it was decided that abstraction should remain at the level of dialogue components, following the core of the unified user interface development approach, rather than at the level of lower-level dialogue primitives, such as interaction objects, as had been applied in previously reported work, for example in dual interface development [29]. The strategies for unification that can be employed at the level of interaction objects, namely integration, augmentation, expansion and abstraction, when developing for diverse users and platforms, are described in detail in [32].

Fig. 9 Three instances of the AVANTI browser interface

Fig. 10 The organisation structure of the dialogue components

Fig. 11 The activation of dialogue components that provide switch-based scanning for motor-impaired user access

In Fig. 12, the activation of alternative components for history-task guidance is shown. In this case, the component organisation scheme previously described has been employed. However, the most challenging issue in this scenario concerned the dynamic identification of the need to provide guidance, by detecting a potential confusion or inability to perform the task, triggering the provision of "pop-up" guidance. This required particular focus in the implementation of the UIS, with heuristic algorithms based on state automata for the recognition of particular user situations (like confusion), while it inherently necessitated interaction-monitoring software to be integrated within all the dialogue components.
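Purely as an invented illustration of such a state-automaton heuristic (the actual UIS recognition logic is not reproduced in this paper; the event names and thresholds below are hypothetical), a confusion detector fed by monitoring events might look like this:

```cpp
#include <string>

// Hypothetical confusion detector driven by monitored interaction events.
// Event names and thresholds are invented for illustration only.
class ConfusionDetector {
public:
    // Feed one monitored event; returns true when "confusion" is inferred.
    bool onEvent(const std::string& event) {
        if (event == "task started" || event == "task completed") {
            idleTicks_ = 0;
            backtracks_ = 0;
        } else if (event == "action undone") {
            ++backtracks_;
        } else if (event == "idle tick") {
            ++idleTicks_;
        }
        return backtracks_ >= 3 || idleTicks_ >= 5;
    }
private:
    int idleTicks_ = 0;
    int backtracks_ = 0;
};
```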

5.1 The DMSL

Fig. 12 The activation of alternative components for history-task guidance

In Table 3, the decision-making logic is defined in the decision making specification language (DMSL), which is a compact language that has been specifically designed and implemented for this purpose. The characteristics of this approach are as follows:

– The decision logic is organised in "if...then...else" blocks, and each block is associated with a particular dialogue context. During run-time, the DMC is always asked (by the DPC) to carry out decision making for specific dialogue contexts; hence, this approach enables the localisation and splitting of the decision-making logic, significantly boosting run-time performance (the corresponding "logic" blocks are identified through a hash-table).

– The individual end-user and usage-context profiles are made syntactically available through the user. and context. prefixes, while attribute values are defined as matching quoted strings (there are also built-in functions for string manipulation and numeric content extraction, including range checks). The use of quoted strings is necessary to enable arbitrary run-time interpreted attribute values to be defined.

– There are only three primitive statements: (a) dialogue, which initiates evaluation of the rule block corresponding to the dialogue context value supplied (in Table 3, the statement dialogue "kiosk toolbar expert" initiates, after the particular block is evaluated, the evaluation of the "kiosk toolbar expert" block); (b) activate, which adds the specified string value to the set of activation decisions to be posted (when the overall evaluation phase is completed) to the DPC; it should be noted that an activation decision with the value "user profile error" is to be interpreted by the DPC to designate, for example, an incomplete user profile (i.e., it is up to the interface developer to handle such a case; the decision-making logic simply posts a decision indicating the type of the error); and (c) cancel, which, similarly to activate, adds the specified string value to the list of cancellation decisions to be posted to the DPC. An illustrative excerpt is sketched after this list.

– The rules are compiled into a tabular representation that is executed at runtime. This representation engages simple expression-evaluation trees for the conditional expressions, leading finally to the execution of the basic activate, cancel and dialogue statements. Additionally, the execution engine performs cycle elimination, by simply recording the dialogue contexts in each evaluation sequence and preventing a dialogue context from being evaluated twice.

– The representation as "if...then...else" clauses limits the chances for more sophisticated decision making. Actually, the initial implementation of the decision logic in the AVANTI browser was hard-coded in C++, but it quickly became evident that this logic reflected simple rule patterns. DMSL is a more recent development in the context of decision-making support for unified user interface development, supporting the embedding of decision-making logic in a way directly editable by designers. This also enables different designers to contribute to different dialogue contexts, due to the localisation of DMSL decision logic according to dialogue contexts.
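To illustrate, the following hypothetical DMSL excerpt (embedded here as a C++ string constant; the attribute names, artefact identifiers and exact condition syntax are invented, the authoritative grammar being that of Table 3) combines the three primitive statements:

```cpp
// Hypothetical DMSL rules for a "toolbar" dialogue context; invented names.
const char* kToolbarRules = R"(
    if user.web_expertise = "novice" then
        activate "links as buttons"
    else
        activate "links as underlined text"

    if context.terminal = "kiosk" then
        cancel   "desktop toolbar"
        activate "kiosk toolbar"
        dialogue "kiosk toolbar expert"
)";
```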

The employment of string encoding for the identification of the dialogue artefacts to be posted to the DPC, via activation or cancellation messages, is a purposeful decision, because it enables additional information to be embedded within the message content, and to be subsequently interpreted dynamically by the corresponding dialogue components of the DPC. For instance, assume that the display size and screen resolution of the terminal are to be used as a parameter to dynamically select alternative ways of layout organisation for particular dialogue boxes, e.g., for "file management", in short "fm". In this case, from an interface design point of view, there are two possible policies: (a) to enumerate alternative designs (e.g., "small fm", "medium fm", "large fm"), corresponding to discrete values of the display size (e.g., "small", "medium", "large"); and (b) to require the dialogue box implementation to analogically compute the layout based on display size (e.g., width and height) information.

While the former case requires three alternative artefacts with three distinct identifiers, the latter requires a single artefact, which, however, necessitates the display size information. To address such cases, the DPC artefact indexing policy may universally employ string identifiers with the internal syntactic form "<string id>:<extra parameters>". Hence, in the current example, the string content would actually have the form "fm:<width>,<height>", with run-time examples such as "fm:1024,768" or "fm:200,100". In this case, the artefact programmer would need to be aware of this formal structure, and would be responsible for parsing and extracting the run-time parameters, and subsequently using those parameters as input to the dynamic layout calculation algorithm. This technique of employing string representations for messages, which can encompass dynamically interpretable request identification and parameters, is also employed in dynamically extensible servers; the latter may evolve over time and provide new types of services to clients, without, however, altering the basic remote-access protocol.
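A minimal sketch of the parsing step the artefact programmer would write for the "fm" example above (the helper names are illustrative):

```cpp
#include <cstdio>
#include <string>

struct FmParams { int width; int height; };

// Returns true when the identifier carries the "fm:<width>,<height>" form,
// e.g. parseFm("fm:1024,768", p) yields p.width == 1024, p.height == 768,
// which the artefact feeds to its dynamic layout calculation algorithm.
bool parseFm(const std::string& id, FmParams& out) {
    return std::sscanf(id.c_str(), "fm:%d,%d", &out.width, &out.height) == 2;
}
```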

6 Discussion

In this section, development situations in which unified user interface development can encounter difficulties are identified and discussed. Subsequently, the particular software engineering focus of unified user interface development on the organisation and re-use of dialogue components based on their design role is explored in relation to existing adaptive interface development proposals. The particular run-time behaviour of unified user interfaces, in virtually assembling an appropriate interface instance out of a pool of dialogue components, given a particular end-user and usage-context, is considered in view of existing methods for individualised interaction.


Finally, some lower-level barriers practically experienced in implementing the various components are reported.

6.1 Where unified user interface development may not be appropriate

Unified user interface development aims to address interface engineering issues emerging: (a) when variations of the end-user attributes impose different interaction policies "from the beginning", i.e., when the initial interface is provided to the user, or "in the middle", i.e., when interaction is already taking place; and (b) similarly, when variations of the usage-context impose interaction-policy differentiation requirements. However, not all such situations can be effectively tackled through unified user interface development.

For instance, if for a specific user category the interface design should reflect severe differentiations regarding the overall task structure, as well as the physical design, practically limiting the possibilities for abstraction and unification, then it may be more cost-effective to provide a customised interface design and implementation. Such scenarios have emerged in particular in the context of efforts targeted at providing applications for users with cognitive impairments and learning difficulties, or, for instance, when designing applications for children [15]. During the early stages of the design process, it quickly became evident that the interface design itself was very much customised, regarding both the dialogue elements (i.e., instead of using typical windowing objects, cartoon-based representations had to be employed), and the overall dialogue structure (simple modal dialogues, with intensive friendly help and increased error tolerance). Moreover, all such designed dialogue components were only appropriate in the context of the specific design, practically excluding the chances for further re-use or combination with other dialogues not designed for this specific user group. In this case, the embedding of such customised designs into a unified-development process may introduce additional implementation complexity without particular interaction-quality benefits. As a consequence, if the overall design is so customised and "closed" that there are practically no possibilities to re-use or combine dialogue components for other user- or usage-context attribute values, then the unified user interface development approach should be avoided.

A similar situation also emerged in the early stages of developing applications for specific computing platforms, when no particular emphasis was given to addressing user diversity. For instance, the initial development of applications for mobile phones engaged radically different design elements with respect to those typically occurring in desktop applications. Additionally, early phones merely provided very primitive interaction facilities that largely restricted the chances of producing embedded applications that could address the needs of different user groups. In such cases, the target platform is considered to be virtually invariant, implying that the design stage assumes a specific I/O profile. At the same time, software vendors, when faced with such restricted interface development tools, are naturally not interested in investing in support for user-adapted behaviour. Consequently, if user- and usage-context diversity is not a priority to be addressed, there is no need to apply unified user interface development. Moreover, until recently, the structure of dialogues in typical mobile-phone applications has reflected a menu-based style with modal dialogues. Additionally, the corresponding software libraries employed for interface development have been strictly platform-dependent, enabling no further re-use or combination with other dialogue components beyond their specific platform.

However, following the recent developments in the context of applications for mobile phones, more sophisticated development tools have become available, offering facilities that approach those for typical PDAs (e.g., Java 2 Micro Edition (J2ME) by JavaSoft, and the Binary Runtime Environment for Wireless (BREW) by Qualcomm). Additionally, the advanced facilities for digital audio playback, voice recognition and speech synthesis, together with the enhanced graphics display elements (GUI controls, high resolution, and a run-time configurable colour palette), offer the necessary ingredients to develop applications for diverse user attributes, including people with disabilities, elderly people, children, etc., in the form of unified interactive applications.

6.2 An emphasis on design-oriented dialogue component re-use

Generally, practical support for re-use is reflected by two key factors: (a) the ability to locate software components that match certain design criteria; and (b) the facility to deploy located software components. As a result, when development methods emphasise either better matching or easier deployment, re-use is further promoted.

In unified user interface development, the organisation of dialogue components according to their associated dialogue design context is a fundamental technical property (see Fig. 10, outlining software component relationships), while the decision logic, which practically documents the design role of each component, engages naming conventions that are directly employed in the target implementation for run-time indexing purposes (see the excerpt in Table 3). Such design-oriented organisation of dialogue implementation units facilitates their straightforward location within the overall system implementation, through design parameters.

When, for particular container components, all the various contained components are "unimorphic", i.e., they are supplied with a single invariant version, container or contained-component parameterisation is not applied, thus making containers implementationally dependent on the contained components. This decision practically eliminates the chances for potential re-use of either container or contained component categories. However, in unified user interface development, the implementation of containers has to be independent of the corresponding contained components, by employing abstraction patterns, since the variability and extensibility of contained components has to be supported. The latter approach makes containers and contained components largely parameterised, enabling easier deployment and re-use.

Overall, even though the unified user interface development approach does not introduce any new techniques for software re-usability, and has not been designed with a primary emphasis on re-usability as such, it proposes a development discipline in which re-usability is effectively reflected and promoted. In this context, it is important to highlight the software engineering maturity of this specific development code of practice, which supports component extensibility, re-usability and orthogonality, thus facilitating dynamic run-time interface assembly.

6.3 The concept of the dynamic interface assembly

The concept of dynamic interface assembly reflects the key run-time mechanisms to support adaptability in unified user interfaces. Previous work in adaptive interaction has involved techniques such as the detection of user attributes, adaptive prompting, and localised lexical-level modifications (e.g., re-arranging menu options, or adding/removing operation buttons). The issue of making the interface fit individual users from the beginning has been addressed in the past mainly as a configuration problem, requiring interface developers to supply configuration editors so that end users could fit the interface to their particular preferences. However, such methods are limited to fine-tuning some lexical-level aspects of the interface (e.g., tool-bars, menus), while they always require explicit user intervention, i.e., there is no automation. In this context, the notion of adaptability as realised in unified user interfaces offers new possibilities for automatically-adapted interactions, while the architecture and run-time mechanisms to accomplish dynamic interface assembly constitute a unique software engineering perspective.

Some similarities with dynamic interface assembly can be found in typical Web-based applications delivering dynamic content. The software engineering methods employed in such cases are based on the construction of application templates (technologies such as Microsoft's Active Server Pages (ASP) or JavaSoft's Java Server Pages (JSP) are usually employed), with embedded queries for dynamic information retrieval, delivering to the user a Web page assembled on-the-fly. In this case, there are no alternative embedded components, just content to be dynamically retrieved, while the Web-page assembly technique is mandatory when HTML-based Web pages are to be delivered to the end-user (in HTML, each time the content changes, a different HTML page has to be written). However, in case a full-fledged embedded component is developed (e.g., as an ActiveX object or Java applet), no run-time assembly is required, since the embedded application internally manages content extraction and display, like a common desktop information retrieval application.

The implementation of unified user interfaces is organised in hierarchically structured software templates, in which the key placeholders are parameterised container components. This hierarchical organisation, as reflected in the development excerpts, mirrors the fundamentally hierarchical constructional nature of interfaces. The ability to diversify and support alternatives in this hierarchy is due to containment parameterisation, while the adapted assembly process is realised by selective activation, engaging remote decision making on the basis of end-user and usage-context information.

In Fig. 13, the concept of parametric container hierarchies is illustrated. Container classes expose their containment capabilities and the type of supported contained objects by defining abstract interfaces (i.e., abstract OOP classes) for all the contained component classes. These interfaces, defined by the container class developers, constitute the programming contract between the container and the contained classes. In this manner, alternative derived contained-component classes may be instantiated at runtime as constituent elements of a container. Following the definition of the polymorphic factor PL, which provides a practical metric of the number of possible alternative run-time configurations of a component, the PL of the top-level application component gives the number of possible alternative dynamically assembled interface instances (see also Fig. 2).

From a programming point of view, in the AVANTI browser, the activation control of dialogue components for run-time assembly has been mainly realised through typical library function calls. Such function calls engage object instances corresponding to dialogue components, without employing any component-ware technology. Hence, this run-time assembly behaviour has been accomplished without the need for locating, fetching, and combining components together. Nevertheless, efforts have been devoted to applying and testing the latter approach in real practice, by employing a component-ware technology (DCOM/ActiveX) for a limited number of dialogue components. This required a more labour-intensive implementation approach (from a C++ point of view, while isolated testing of components with Visual Basic was far easier), for packaging dialogues to make them component-enabled, as well as for further activating and using them at runtime. However, there are some evident advantages:

– Dialogue components need not all be carried along with the application, but can be dynamically loaded, thus promoting a thin DPC implementation;

– In effect, the core logic of the DPC, apart from the dialogue components, can also be packaged as a component itself, making it reusable across different applications;

– Automatic updates and extensions of components are directly supported, enabling new versions, or even new dialogue components (addressing more user- and usage-context attribute values), to be centrally installed in appropriate component repositories.

6.4 An emphasis on extensibility, maintenance, and code sharing

It is well known in software engineering that "the more we share, the less we have to change". This principle reflects the ability to globally apply changes to common software artefacts "in one shot", assuming they are re-used as they are by different components of a software application. On the opposite side, through replication and customisation of similar but slightly different software structures, one has to pay the overhead of manually updating all distinct occurrences of the replicated software structure. The latter approach is known to introduce an "entropy increase" in software development, while requiring the repetition of update schemes in similar modules residing in different source-code bases. The theoretical solution to the replication syndrome is the mechanism of abstraction inherent in OOP languages, emphasising the sharing of responsibilities, with standardised object collaborations. In unified user interface development, those principles are directly reflected in the organisational structure of dialogue components, promoting code sharing for the invariant interface components, with parametric containment of the variant design alternatives. Such an approach has been practically proven to lead to more easily extensible and maintainable systems, reducing the effort necessary to introduce variations of functionality or to globally modify common interface artefacts.

6.5 Additional implementation issues

6.5.1 Tool limitations in open parametric containment

The concept discussed above is practically constrained by limitations on the physical containment regulations imposed by container interaction objects in different toolkits. More specifically, concrete contained components are bound to be developed using the same interaction object library as the particular container instance component. Two possible development policies have been tested for this scenario:

– Trying to mix containers from different toolkits through the same programming language. This option did not work with GUI toolkits at all, starting even from the compile/link phases. Different toolkits simply define common super-classes, posing either compile conflicts (e.g., "replicate definitions" has been the most common error) or link conflicts (libraries cannot be combined together).

– Trying to employ component technologies, taking advantage of inter-operability among different component technologies. In particular, the following implementations have been tested: Java containers/contained classes as JavaBeans, and MFC containers/contained classes as ActiveX components. Overall, this has proved to be a quite demanding task, and it worked in only one direction: (a) the use of ActiveX containers containing JavaBeans components works through the Java Beans Bridge for ActiveX, the latter being part of the Java plug-ins; the "bridge" enables the packaging of JavaBeans in the form of ActiveX components (at runtime, the "bridge" looks to the system like an ActiveX control, while at the same time giving the JavaBeans a Java environment); and (b) the use of JavaBeans containers with embedded ActiveX components required the manual re-implementation of the ActiveX contained components in the form of JavaBeans, through the use of the JavaBeans Migration Assistant for ActiveX, therefore excluding the chances for run-time inter-operability.

Fig. 13 The concept of parametric container hierarchies

6.5.2 The exposure of a generic interaction monitoring API

Following the micro-architecture of the DPC, the management of monitoring requests, as well as the posting of dynamically collected monitoring data to the UIS, is centralised in a single embedded component. This does not merely reflect a logical organization, but is the implementation approach that has been employed in the context of the AVANTI browser. To achieve such centralised monitoring control, all dialogue components need to expose (i.e., implement) one common programming interface (i.e., an abstract class) for installing/uninstalling monitoring functionality (mainly event handlers). This particular API (see Fig. 14 for an outline of the dialogue component implementation components) enabled the registration of appropriate handlers by the monitoring manager, to be called by the respective dialogue components each time interaction events subject to monitoring are detected. The monitoring manager would then package such events as monitoring data messages, to be subsequently posted to the UIS.

It should be noted that, to make the abstract monitoring API as generic as possible, it was decided to employ typical string representations for event categories and event data, making each component responsible for dynamic interpretation and encoding. Thus, we avoided the need to update the monitoring manager each time new event classes or event data types emerged. Moreover, this approach enabled us to package the core functionality of the DPC as a single invariant re-usable component, clearly separated from the implementation of the dialogue components.
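A hedged C++ sketch of such a common monitoring interface, with invented names (the actual AVANTI declarations are not reproduced here), could look as follows; note how the string-typed event category and data keep the monitoring manager component-agnostic:

```cpp
#include <functional>
#include <string>

// Handler installed by the monitoring manager; events travel as strings.
using MonitorHandler =
    std::function<void(const std::string& eventCategory,
                       const std::string& eventData)>;

// Common interface exposed (i.e., implemented) by every dialogue component.
class Monitorable {
public:
    virtual ~Monitorable() {}
    virtual void installMonitoring(const MonitorHandler& h) { handler_ = h; }
    virtual void uninstallMonitoring() { handler_ = nullptr; }
protected:
    // Called by the component when a monitored interaction event occurs;
    // the manager packages it as a monitoring data message for the UIS.
    void report(const std::string& category, const std::string& data) {
        if (handler_) handler_(category, data);
    }
private:
    MonitorHandler handler_;
};
```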

7 Conclusions

Equal participation of all citizens in the information society is a recognised socio-technical imperative that presents software vendors with a variety of challenges. Eliminating or reducing the digital divide necessitates the delivery of a wide range of software applications and services, for a variety of computing platforms, accessible and usable by diverse target user groups, including people with disabilities and elderly people. Software firms will be encouraged to work more actively towards this objective if a cost-effective engagement strategy can be clearly formulated. The technical knowledge required to build user- and usage-context adapted interfaces may include user modelling, task design, action patterns, cognitive psychology, rule-based systems, network communication and protocols, multi-platform interfaces, component repositories, and core software engineering. Moreover, the development of monolithic systems, encapsulating the necessary computable artefacts from those domains of required expertise, is not a viable development strategy. Additionally, software developers prefer incremental engagement strategies, allowing a stepwise entrance to new potential markets, by delivering successive generations of products encompassing layers of novel characteristics. Similarly, the development of software applications supporting universal access, i.e., software applications accessible by anyone, anywhere, and at any time, requires a concrete strategy supporting evolutionary development, software re-use, incremental design, and modular construction.

Fig. 14 The outline of the dialogue component implementation components

The unified user interface development discussed in this paper claims to offer a software engineering proposition that consolidates process-oriented wisdom for constructing universally accessible interactions. Evolution, incremental development and software reuse are some of the fundamental features of unified user interface development. These are reflected in the ability to progressively extend a unified user interface, by incrementally encapsulating computable content in the different parts of the architecture, to cater for additional users and usage contexts, by designing and implementing more dialogue artefacts, and by embedding new rules in the decision-making logic. Such characteristics are particularly important and relevant to the claimed feasibility and viability of the proposed software engineering process, and directly facilitate the practical accomplishment of universally accessible interactions.

The concept of unified user interfaces reflects a new software engineering paradigm that effectively addresses the need for interactions automatically adapted to the individual end-user requirements and the particular context of use. Following this technical approach, interactive software applications encompass the capability to appropriately deliver "on-the-fly" an adapted interface instance, performing appropriate run-time processing that engages:

– The utilisation of user- and usage-context oriented information (e.g., profiles), as well as the ability to dynamically detect user- and usage-context attributes during interaction;

– The management of appropriate alternative implemented dialogue components, realising alternative ways for physical-level interaction;

– Adaptation-oriented decision making that facilitates: (a) the selection, before the initiation of interaction, of the most appropriate dialogue components comprising the delivered interface, given any particular dialogue context, for the particular end-user and usage-context profiles (i.e., adaptability); and (b) the implementation of appropriate changes in the initially delivered interface instance, according to dynamically detected user- and usage-context attributes (i.e., adaptivity);

– Run-time component co-ordination and control, to dynamically assemble or alter the target interface; this user interface is composed "on-the-fly" from the set of dynamically selected constituent dialogue components.

The unified user interface development strategy provides a distributed software architecture with well-defined functional roles (i.e., which component does what), inter-communication semantics (i.e., which component requests what, and from whom), control flow (i.e., when to do what), and internal decomposition (i.e., how the implementation of each component is internally structured). One of the unique features of this development paradigm is the emphasis on dynamic interface assembly for adapted interface delivery, reflecting a software engineering practice with repository-oriented component organisation, parametric containers with abstract containment APIs, and common interaction-monitoring control with abstract APIs. Although the method itself is not intended to be intensively prescriptive from the low-level implementation point of view, specific successful practices that have been technically validated in field work, regarding decision making and dynamic user-attribute detection, have also been discussed, focusing on micro-architecture details and internal functional decomposition.

The unified user interface development method was presented in several tutorials [37, 39, 40, 41], and was the development approach adopted in several research projects partially funded by the European Commission² (TP1001 - ACCESS; IST-1999–20656 - PALIO; ACTS AC042 - AVANTI) or by national government funding agencies (e.g., EPET-II - NAUTILUS).

In this context, this development method has been systematically deployed and tested in practical situations where universal access for computer-based interactive applications and services was the predominant issue. It introduces the fundamental notion of adapted interface delivery ‘‘before initiation of interaction’’, and addresses the technical challenges of coping with the inherent run-time dynamic interface assembly process.

The proposed approach establishes one possible technical route towards constructing universally accessible interactions: it enables incremental development and facilitates the expansion and upgrade of dialogue components as an ongoing process, entailing the continuous engagement and consideration of new design parameters and new parameter values. It is anticipated that future research work may reveal alternative approaches or methods. At the same time, further research and development work for unified user interfaces is required to address some existing challenges, mainly related to design issues (see Fig. 15).

Fig. 15 Further research and development work for unified user interfaces

² See the Acknowledgements section



Following Fig. 15, one top-level issue concerns the way that specific varying user attributes affecting interaction are to be identified. In other words, there is a need to identify diversity in those human characteristics that are likely to dictate alternative dialogue means.

Subsequently, even if a set of those attributes is identified, it is still unclear how to conduct a design process that produces the necessary alternative dialogue artefacts for the different values of those attributes. Hence, it is necessary to design for diversity, relying upon appropriate design rationale that clearly relates diverse attribute values to specific properties of the target dialogue artefacts. Currently, there is only limited knowledge about how to perform this transition from alternative user-attribute values to alternative design artefacts effectively; this can be characterised as a rationalisation gap.

Finally, the issue of how to appropriately structure alternative patterns for diverse user-attribute values should be addressed in a way that the resulting designs are indeed efficient, effective and satisfactory for their intended users and usage contexts. Such a process requires appropriate evaluation methods, and the capability to measure the appropriateness of designed artefacts. At present, this is still a ‘‘missing link’’, characterised as the measurability gap. Unless we are able to assert the appropriateness of the alternative dialogue artefacts designed for diverse user attributes, we cannot validate the overall dynamically delivered interface. The inability to formulate and conduct such an evaluation process creates a validity gap.

Work currently underway, as well as future work, is expected to address these issues in an attempt to bridge the identified gaps.

Acknowledgements The authors wish to acknowledge with appreciation the guidance of the managing editor and the contribution of the anonymous reviewers in significantly improving the quality of the manuscript during the peer-review process. The unified interface development method was originally defined in the context of the ACCESS TP1001 (the development platform for unified ACCESS to enabling environments) project, partially funded by the TIDE Programme of the European Commission, which lasted 36 months (from January 1st, 1994 to December 31st, 1996). The partners of the ACCESS consortium are: CNR-IROE (Italy) - Prime contractor; ICS-FORTH (Greece); University of Hertfordshire (UK); University of Athens (Greece); NAWH (Finland); VTT (Finland); Hereward College (UK); RNIB (UK); Seleco (Italy); MA Systems & Control (UK); PIKOMED (Finland). The first large-scale application of the unified interface development method was carried out in the context of the AVANTI AC042 (Adaptable and Adaptive Interaction in Multimedia Telecommunications Applications) project, partially funded by the ACTS Programme of the European Commission, which lasted 36 months (from September 1st, 1995 to August 31st, 1998). The partners of the AVANTI consortium are: ALCATEL Italia, Siette division (Italy) - Prime Contractor; IROE-CNR (Italy); ICS-FORTH (Greece); GMD (Germany); VTT (Finland); University of Siena (Italy); MA Systems and Control (UK); ECG (Italy); MATHEMA (Italy); University of Linz (Austria); EUROGICIEL (France); TELECOM (Italy); TECO (Italy); ADR Study (Italy). The PALIO ‘‘Personalised Access to Local Information and Services for Tourists’’ project (IST-1999–20656) is partly funded by the Information Society Technologies Programme of the European Commission - DG Information Society. The partners in the PALIO consortium are: ASSIOMA S.p.A. (Italy) - Prime Contractor; CNR-IROE (Italy); Comune di Firenze (Italy); FORTH-ICS (Greece); GMD (Germany); Telecom Italia Mobile S.p.A. (Italy); University of Siena (Italy); Comune di Siena (Italy); MA Systems and Control Ltd (UK); FORTHnet (Greece). The NAUTILUS ‘‘Unified Web Browser for People with Disabilities’’ project is funded by the EPET-II Programme (Operational Programme for Research & Technology of the General Secretariat for Research and Technology, Hellenic Ministry of Development). The partners in the NAUTILUS consortium are: ICS-FORTH (Greece), TRD International SA (Greece) and NCDP (Greece).

References

1. ACCESS Project (1996) The ACCESS project - development platform for unified access to enabling environments. RNIB Press, London
2. Akoumianakis D, Savidis A, Stephanidis C (1996) An expert user interface design assistant for deriving maximally preferred lexical adaptability rules. In: Proceedings of the 3rd World Congress on Expert Systems, Seoul, Korea, 5–9 February 1996
3. Benyon D (1984) MONITOR: a self-adaptive user-interface. In: Proceedings of the IFIP Conference on Human-Computer Interaction: INTERACT '84 (vol 1), Elsevier, Amsterdam
4. Blattner MM, Glinert JA, Ormsby GR (1992) Metawidgets: towards a theory of multimodal interface design. In: Proceedings of COMPSAC '92, IEEE Computer Society Press, New York
5. Browne D, Norman M, Adhami E (1990) Methods for building adaptive systems. In: Browne D, Totterdell M, Norman M (eds) Adaptive user interfaces, Academic Press, London
6. Browne D, Totterdell M, Norman M (eds) (1990) Conclusions. Adaptive user interfaces, Academic Press, London
7. Cockton G (1987) Some critical remarks on abstractions for adaptable dialogue managers. In: Proceedings of the 3rd Conference of the British Computer Society, People & Computers III, HCI Specialist Group, University of Exeter, Cambridge University Press, Cambridge, UK
8. Cockton G (1993) Spaces and distances - software architecture and abstraction and their relation to adaptation. In: Schneider-Hufschmidt M, Kuhme T, Malinowski U (eds) Adaptive user interfaces - principles and practice, Elsevier, Amsterdam
9. Cote Munoz J (1993) AIDA - an adaptive system for interactive drafting and CAD applications. In: Schneider-Hufschmidt M, Kuhme T, Malinowski U (eds) Adaptive user interfaces - principles and practice, Elsevier, Amsterdam
10. Coutaz J (1990) Architecture models for interactive software: failures and trends. In: Cockton G (ed) Engineering for human-computer interaction, Elsevier, Amsterdam
11. Dieterich H, Malinowski U, Kuhme T, Schneider-Hufschmidt M (1993) State of the art in adaptive user interfaces. In: Schneider-Hufschmidt M, Kuhme T, Malinowski U (eds) Adaptive user interfaces - principles and practice, Elsevier, Amsterdam
12. Foley J, Kim W, Kovacevic S, Murray K (1991) GUIDE - an intelligent user interface design environment. In: Sullivan J, Tyler S (eds) Architectures for intelligent interfaces: elements and prototypes, Addison-Wesley, Reading, MA
13. Gamma E, Helm R, Johnson R, Vlissides J (1995) Design patterns: elements of reusable object-oriented software. Addison-Wesley, Reading, MA
14. Goldberg A (1984) Smalltalk-80: the interactive programming environment. Addison-Wesley, Reading, MA
15. Grammenos D, Stephanidis C (2002) Interaction design of a collaborative application for children. In: Bekker MM, Markopoulos P, Kersten-Tsikalkina M (eds) Proceedings of the International Workshop Interaction Design and Children, Eindhoven, The Netherlands, 28–29 August 2002



16. Green M (1985) Report on dialogue specification tools. In: Pfaff G (ed) User interface management systems, Springer, Berlin Heidelberg New York
17. GUIB Project (1995) Textual and graphical user interfaces for blind people. The GUIB Project - public final report, RNIB Press, UK
18. Hartson R, Hix D (1989) Human-computer interface development: concepts and systems for its management. ACM Comp Surv 21(1):241–247
19. Jacobson I, Griss M, Johnson P (1997) Making the reuse business work. IEEE Comp 10:36–42
20. Kawai S, Aida H, Saito T (1996) Designing interface toolkit with dynamic selectable modality. In: Proceedings of the Second Annual ACM Conference on Assistive Technologies (ASSETS '96), Vancouver, Canada, 11–12 August 1996
21. Kobsa A (1990) Modeling the user's conceptual knowledge in BGP-MS, a user modeling shell system. Comp Intellig 6:193–208
22. Kobsa A, Pohl W (1995) The user modeling shell system BGP-MS. User Model User Adapt Inter 4(2):59–106
23. Kobsa A, Wahlster W (eds) (1989) User models in dialog systems. Springer, Berlin Heidelberg New York
24. Krasner GE, Pope ST (1988) A description of the model view controller paradigm in the Smalltalk-80 system. J Obj-Orient Prog 1(3):26–49
25. Mowbray TJ, Zahavi R (1995) The essential CORBA: systems integration using distributed objects. Wiley, New York
26. Myers B (1995) User interface software tools. ACM Trans Hum-Comp Inter 12(1):64–103
27. Mynatt E, Weber G (1994) Nonvisual presentation of graphical user interfaces: contrasting two approaches. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '94), ACM Press, New York
28. Puerta AR (1997) A model-based interface development environment. IEEE Soft 14(4):41–47
29. Savidis A, Stephanidis C (1995a) Developing dual interfaces for integrating blind and sighted users: the HOMER UIMS. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '95), Denver, Colorado, 7–11 May 1995
30. Savidis A, Stephanidis C (1995b) Building non-visual interaction through the development of the rooms metaphor. In: Companion Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '95), Denver, Colorado, 7–11 May 1995
31. Savidis A, Stephanidis C (1998) The HOMER UIMS for dual user interface development: fusing visual and non-visual interactions. Int J Interact Comp 11(2):173–209
32. Savidis A, Stephanidis C (2001) Development requirements for implementing unified user interfaces. In: Stephanidis C (ed) User interfaces for all - concepts, methods, and tools, Lawrence Erlbaum, Mahwah, NJ
33. Savidis A, Stergiou A, Stephanidis C (1997a) Generic containers for metaphor fusion in non-visual interaction: the HAWK interface toolkit. In: Proceedings of the 6th International Conference on Man-Machine Interaction Intelligent Systems in Business (INTERFACES '97), Montpellier, France, 28–30 May 1997
34. Savidis A, Vernardos G, Stephanidis C (1997b) Embedding scanning techniques accessible to motor-impaired users in the WINDOWS object library. In: Salvendy G, Smith MJ, Koubek RJ (eds) Design of computing systems: cognitive considerations. Proceedings of the 7th International Conference on Human-Computer Interaction (HCI International '97), San Francisco, CA, 24–29 August 1997
35. Schwerdtfeger RS (1991) Making the GUI talk. BYTE 16(12):118–128
36. Short K (1997) Component-based development and object modeling. Texas Instruments Software, version 1.0
37. Stephanidis C, Akoumianakis D, Paramythis A (1999) Coping with diversity in HCI: techniques for adaptable and adaptive interaction. Tutorial No. 11 of the 8th International Conference on Human-Computer Interaction (HCI International '99), Munich, Germany, 22–26 August 1999
38. Stephanidis C, Paramythis A, Sfyrakis M, Savidis A (2001) A case study in unified user interface development: the AVANTI Web browser. In: Stephanidis C (ed) User interfaces for all - concepts, methods, and tools, Lawrence Erlbaum, Mahwah, NJ
39. Stephanidis C, Savidis A, Akoumianakis D (1997) Unified user interface development: tools for constructing accessible and usable user interfaces. Tutorial No. 13 of the 7th International Conference on Human-Computer Interaction (HCI International '97), San Francisco, CA, 24–29 August 1997
40. Stephanidis C, Savidis A, Akoumianakis D (2001a) Engineering universal access: unified user interfaces. Tutorial of the 1st Universal Access in Human-Computer Interaction Conference (UAHCI 2001), jointly with the 9th International Conference on Human-Computer Interaction (HCI International 2001), New Orleans, LA, 5–10 August 2001
41. Stephanidis C, Savidis A, Akoumianakis D (2001b) Universally accessible UIs: the unified user interface development. Tutorial of the ACM Conference on Human Factors in Computing Systems (CHI 2001), Seattle, Washington, 31 March–5 April 2001
42. Szekely P, Luo P, Neches R (1992) Facilitating the exploration of interface design alternatives: the HUMANOID model of interface design. In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 1992), ACM Press, New York
43. UIMS Developers Workshop (1992) A meta-model for the run-time architecture of an interactive system. SIGCHI Bulletin 24(1):32–37
44. Vergara H (1994) PROTUM - a Prolog-based tool for user modeling. Bericht Nr. 55/94 (WIS-Memo 10), University of Konstanz, Germany
