Context Learning Can Improve User Interaction

Post on 19-Jan-2016



Context Learning Can Improve User Interaction

Sushil J. Louis, Anil K. Shankar

Evolutionary Computing Systems Lab (ECSL)

Department of Computer Science and Engineering, University of Nevada, Reno

http://www.cs.unr.edu/~anilk

anilk@cs.unr.edu

sushil@cs.unr.edu

Current UIs can be improved

• Hardware: keyboard, mouse, clock

• Software: GUI

• Little personalization, no long-term memory

• Little use of context

• Advances in speech, vision, and text analysis have not been well integrated

Can extended context improve the UI?

• What sensors should we use?

• How do we use extended context to improve user interaction?

– Can we personalize interaction?

– Personalized, transportable UI

PC is a stationary robot

Simple sensors provide context

• Good vision, speech recognition, and image or speech understanding are hard AI problems

• What can we do with simple sensors?

– Object recognition versus motion detection

– Speech recognition versus speech detection

– Keyboard activity

– Mouse activity

– Selected processes
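The simple-sensor set above can be pictured as a per-interval snapshot; this is a sketch, and all names here are illustrative, not taken from Sycophant's code.

```python
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    """One 15-second sample of the simple sensors: cheap detection
    signals rather than hard recognition/understanding outputs."""
    keyboard_active: bool   # any keystroke in the interval
    mouse_active: bool      # any mouse movement or click
    motion_detected: bool   # webcam motion detection, not object recognition
    speech_detected: bool   # speech detected, not recognized
    processes_active: dict  # which of the selected processes are running

snap = SensorSnapshot(
    keyboard_active=True,
    mouse_active=False,
    motion_detected=True,
    speech_detected=False,
    processes_active={"java": True, "bash": False, "terminal": True,
                      "xscreensaver": False, "mozilla": True},
)
print(snap.motion_detected)  # True
```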

Simple context allows richer user interaction

• If there is no one in the room should I pop up a scheduled appointment?

• If there is someone in the room should I remind Jane?

• Should I turn down my music player when the telephone rings?

• Should I pause the current song when Jane leaves the room?

…but every user has different answers!

Sycophant uses ML techniques to learn context-to-action mappings

• Sycophant is a calendaring application that learns to predict preferred reminder actions

• Sycophant stores user interaction and context

• Sycophant learns to predict reminder type

Related Work

• Reba (Kulkarni 1992): PC is a stationary robot

• Bailey and Adamczyk, 2004: interruptions disrupt the user's emotional state and task performance

• Hudson, Fogarty, et al., 2003: predict interruptibility from context. A Wizard of Oz study (simulated sensors) achieved 82.4% accuracy

• Sycophant learns whether or not to interrupt the user as well as how to interrupt the user

• Sycophant uses real sensors

Sycophant uses simple context to predict action

• Sensors for context

– Keyboard, mouse

– Motion: http://motion.sourceforge.net and a cheap Logitech webcam

– Speech: the Sphinx speech recognition engine (http://www.speech.cs.cmu.edu). We only DETECT speech

– Five processes: java, bash, terminal, xscreensaver, mozilla

• Sycophant reminder actions (four classes)

– Visual (popup), Speech (TTS), Neither, Both

The user has to provide feedback on action suitability

Sycophant stores sensor data

• For each sensor and process, we store the following derived values if the sensor was activated (15-second sampling intervals)

– Any5: any activation in the last 5-minute interval

– All5: active for all 5 minutes

– Any1: any activation in the last 1-minute interval

– All1: active for the full minute

– Immed: active in the last 15 seconds

– Count: number of times the sensor was active in the last 5 minutes

• User

(4 sensors + 5 processes) × 6 derived values + 1 user = 55 total features
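The six derived values above can be sketched as a small function over 15-second activation samples; the function and key names are illustrative, not Sycophant's actual code.

```python
def derive_features(samples):
    """Derive the six per-sensor values from 15-second boolean
    activation samples, oldest first."""
    last5 = samples[-20:]  # 5 minutes = 20 x 15 s
    last1 = samples[-4:]   # 1 minute  =  4 x 15 s
    return {
        "Any5": any(last5),
        "All5": all(last5),
        "Any1": any(last1),
        "All1": all(last1),
        "Immed": samples[-1],  # active in the last 15 seconds
        "Count": sum(last5),   # activations in the last 5 minutes
    }

# 20 samples: the sensor was active only during the last minute
samples = [False] * 16 + [True] * 4
feats = derive_features(samples)
print(feats)
# {'Any5': True, 'All5': False, 'Any1': True, 'All1': True, 'Immed': True, 'Count': 4}
```

Repeating this for the 4 sensors and 5 processes, plus the user feature, gives the 55-feature vector.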

Sycophant uses WEKA ML tools

• Zero-R: predicts majority class

• One-R: one level decision tree testing one attribute

• J48: Decision tree like C4.5

• Bagging: Voting over N decision trees

• LogitBoost: Numerical model

• Naïve Bayes: probabilistic classifier assuming feature independence
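The original work used WEKA's Java implementations; a rough Python analogue of the comparison, using scikit-learn stand-ins (an assumption: DummyClassifier for Zero-R, a depth-1 tree for One-R, CART for J48, gradient boosting for LogitBoost) on synthetic stand-in data, might look like this:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 55-feature context data set; labels 0-3
# play the role of the four reminder classes. The real data is logged
# user interaction, not random bits.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 55)).astype(float)
y = (X[:, 0] + 2 * X[:, 1]).astype(int)  # fake dependence on two features

learners = {
    "Zero-R (majority class)": DummyClassifier(strategy="most_frequent"),
    "One-R (~ depth-1 tree)": DecisionTreeClassifier(max_depth=1),
    "J48 (~ CART tree)": DecisionTreeClassifier(),
    "Bagging (10 trees)": BaggingClassifier(DecisionTreeClassifier(),
                                            n_estimators=10),
    "LogitBoost (~ boosting)": GradientBoostingClassifier(),
    "Naive Bayes": GaussianNB(),
}
for name, clf in learners.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:25s} {acc:.2f}")
```

On this toy data the tree-based learners separate the classes easily while Zero-R stays at the majority-class rate, mirroring why Zero-R serves as the baseline.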

Results

• Performance of the decision tree inducer with different numbers of features

• Run J48 on all features, then choose the most significant N features

• Show performance on the N features with J48

• Not much difference in performance with fewer features
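The selection procedure above (train on all features, keep the most significant N, retrain) can be sketched with a tree's feature importances; this uses scikit-learn on synthetic data as a stand-in for J48 on the 55-feature logs.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic 55-feature data in which only two features matter.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 55)).astype(float)
y = (X[:, 2] + 2 * X[:, 7]).astype(int)

# Step 1: fit a tree on all 55 features and rank them by importance.
full_tree = DecisionTreeClassifier(random_state=0).fit(X, y)
N = 25
top = np.argsort(full_tree.feature_importances_)[::-1][:N]

# Step 2: re-evaluate using only the top-N features.
acc_all = cross_val_score(DecisionTreeClassifier(random_state=0),
                          X, y, cv=5).mean()
acc_top = cross_val_score(DecisionTreeClassifier(random_state=0),
                          X[:, top], y, cv=5).mean()
print(f"all 55 features: {acc_all:.2f}, top {N}: {acc_top:.2f}")
```

As on the slide, dropping uninformative features leaves performance essentially unchanged.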

Results: Predict user action

• Performance of different ML algorithms on the 25-feature data set on the four-class problem

• Small differences in performance

Results: Two-class problem (Class 1: Remind, Class 2: No reminder)

• Significant increase in performance

• From 65% to 80%

Results

• Sycophant performs at 65% on the four-class problem

• Sycophant performs at 80% on the two-class problem

• Removing the motion and speech detectors results in a statistically significant decrease in performance

• Sample rules:

– IF keyboard Any5 && speech Count > 2 && no motion in the last 1 min && appointment time > 1220 THEN generate Speech AND Popup reminders

– IF keyboard Any5 && speech Count > 2 && keyboard Any1 THEN generate Speech only
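The sample rules could be transcribed as a small function over the derived features; the dictionary key names are assumptions, not Sycophant's identifiers.

```python
def reminder_action(ctx):
    """The two sample rules from the slide, as a sketch.

    `ctx` maps derived-feature names to values, e.g. keyboard Any5,
    speech Count, motion Any1, and the appointment time (HHMM).
    """
    if (ctx["keyboard_any5"] and ctx["speech_count"] > 2
            and not ctx["motion_any1"] and ctx["appointment_time"] > 1220):
        return ("Speech", "Popup")
    if (ctx["keyboard_any5"] and ctx["speech_count"] > 2
            and ctx["keyboard_any1"]):
        return ("Speech",)
    return None  # no rule fires; fall back to the learned classifier

ctx = {"keyboard_any5": True, "speech_count": 3, "motion_any1": False,
       "appointment_time": 1300, "keyboard_any1": True}
print(reminder_action(ctx))  # ('Speech', 'Popup')
```

Intuitively, speech with no motion suggests the user is nearby but not at the desk, so both a spoken and a visual reminder fire; active typing suggests a spoken reminder alone suffices.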

Summary

• Sycophant uses machine learning tools to learn a mapping from user context to user actions

• Simple context provides good features

• Motion and speech sensors lead to a statistically significant performance improvement

• 65% accuracy on the four-class problem

• 80% accuracy on the two-class problem

Future work

• We are developing a general architectural framework for a context learning layer for all applications

• Improve performance

• We need more studies with other users and different types of users

• Feature subset selection

• Classifier systems

Acknowledgements

• Office of Naval Research – Contract Number N00014030104

• Evolutionary Computing System Lab (ECSL)– Chris Miles– Kai Xu– Ryan Leigh– http://ecsl.cs.unr.edu

• Anil K. Shankar

– http://www.cs.unr.edu/~anilk

– Code, other papers
