SetFusion Visual Hybrid Recommender - IUI 2014
DESCRIPTION
Slides of my presentation at IUI 2014 on the visual hybrid recommender SetFusion: "See What You Want to See: Visual User-Driven Approach for Recommendation" http://dl.acm.org/citation.cfm?id=2557542 Demo available: http://www.youtube.com/watch?v=9LwSx1V6Yxk
See What you Want to See: Visual User-Driven Approach
for Recommendation
Denis Parra, PUC Chile
Peter Brusilovsky, University of Pittsburgh
Christoph Trattner, Graz University of Technology
IUI 2014, Haifa, Israel
Outline
• Short intro to some challenges in Recommender Systems
• Our approach to user controllability (demo)
• User study & results
• Summary & future work
02/27/2014 D.Parra et al.~ IUI 2014 2
INTRODUCTION
Recommender Systems: Introduction & challenges addressed in this research
* Danboard (Danbo): Amazon's cardboard robot; in these slides it represents a recommender system
Recommender Systems (RecSys)
Systems that help people find relevant items in a crowded item or information space (McNee et al. 2006)
Challenges of RecSys Addressed Here
Traditionally, RecSys research has focused on producing accurate recommendation algorithms. This research addresses the following challenges:
1. Human factors in RecSys: study controllability by introducing a novel visualization that presents the fusion of different recommenders
2. Evaluation: use of objective, subjective & behavioral metrics
Research Goals & User Studies
Research goal:
• To understand the effect of controllability on user engagement and on the overall user experience of a RecSys (in this paper)
Through:
• Two studies conducted using Conference Navigator:
[Screenshot: Conference Navigator — Program, Proceedings, Author List, and Recommendations tabs]
http://halley.exp.sis.pitt.edu/cn3/
WHY IUI SHOULD CARE: HCI + RECSYS COMMUNITY
Previous research related to this work / motivating results from the TalkExplorer study
7/22/2013 D.Parra ~ PhD. Dissertation Defense 7
TasteWeights (Bostandjiev et al. 2012)
Preliminary Work: TalkExplorer
• Adaptation of the Aduna visualization in CN
• Main research question: does the fusion (intersection) of contexts of relevance improve the user experience?
[Screenshot: TalkExplorer visualization — the CN user at the center, recommender entities, clusters with intersections of entities, and clusters of talks associated with only one entity]
SETFUSION: USER-CONTROLLABLE HYBRID INTERFACE
Our Proposed Interface: SetFusion
Our Proposed Interface - II
Traditional Ranked List: papers sorted by relevance, combining 3 recommendation approaches.
Our Proposed Interface - III
Sliders: allow the user to control the importance of each data source or recommendation method
Interactive Venn Diagram: allows the user to inspect and filter the recommended papers. Actions available:
- Filter the item list by clicking on an area
- Highlight a paper by hovering over a circle
- Scroll to a paper by clicking on a circle
- Indicate bookmarked papers
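The area-click filtering described above is essentially set algebra over the three recommenders' result sets. A minimal sketch of that logic, assuming each method contributes a set of papers (method names and data here are illustrative, not from the slides):

```python
# Illustrative sketch of Venn-area filtering over three recommenders.
# Clicking a Venn area selects the items recommended by exactly that
# combination of methods (inside the intersection, outside all others).
recs = {
    "content-based": {"p1", "p2", "p3"},
    "popularity":    {"p2", "p3", "p4"},
    "collaborative": {"p3", "p5"},
}

def venn_area(selected):
    """Items recommended by every method in `selected` and by no other method."""
    inside = set.intersection(*(recs[m] for m in selected))
    others = [recs[m] for m in recs if m not in selected]
    outside = set.union(*others) if others else set()
    return inside - outside
```

For example, `venn_area({"content-based", "popularity"})` returns only `{"p2"}`: `p3` is excluded because the collaborative method also recommends it, so it belongs to the three-way intersection area instead.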
Mixed Hybridization: Item Score
M: the set of all methods available to fuse
rec_i: recommended item i
m_j: recommendation method j
rank_{rec_i, m_j}: rank position of item rec_i in the list produced by method m_j
W_{m_j}: weight given by the user to method m_j via the slider in the controllable interface
|M_{rec_i}|: the number of methods by which item rec_i was recommended
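The score formula itself did not survive the transcript, only the symbol definitions. As a hedged sketch (an assumption for illustration, not necessarily the paper's exact formula), suppose the fused score of an item sums, over the methods that recommended it, the user's slider weight times a reciprocal-rank term:

```python
# Sketch of a mixed-hybridization item score under the ASSUMPTION
#   score(rec_i) = sum over m_j in M_{rec_i} of W_{m_j} * (1 / rank_{rec_i, m_j});
# the slides define these symbols, but the exact formula is not in the transcript.

def fused_score(item, rankings, weights):
    """rankings: method -> list of items, best first (rank 1 = index 0);
    weights: method -> slider weight W_{m_j} set by the user."""
    score = 0.0
    for method, ranked in rankings.items():
        if item in ranked:                    # method m_j is in M_{rec_i}
            rank = ranked.index(item) + 1     # rank_{rec_i, m_j}, 1-based
            score += weights[method] * (1.0 / rank)
    return score

rankings = {
    "content-based": ["p1", "p2", "p3"],
    "popularity":    ["p2", "p1"],
    "collaborative": ["p3", "p2", "p1"],
}
weights = {"content-based": 0.5, "popularity": 0.3, "collaborative": 0.2}
```

With these illustrative weights, `fused_score("p2", rankings, weights)` is 0.5·(1/2) + 0.3·(1/1) + 0.2·(1/2) = 0.65; moving a slider reweights that method's contribution across every recommended item.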
RESEARCH: DETAILS & RESULTS
Description and analysis of the results of the user studies
Studies: CSCW 2013 & UMAP 2013

                 CSCW 2013                                UMAP 2013
Conditions       Static List / Interactive SetFusion      Interactive SetFusion
# Attendants     ~400                                     ~100
# RecSys users   15 (static) / 22 (interactive)           50
Study type       Between subjects                         1 group

Preliminary user study (CSCW 2013): here we learned that the interactive interface had a positive effect on user behavior and on the perception of the RecSys.

Second study (UMAP 2013): only the interactive interface. Changes:
1. Preference elicitation: in CSCW we avoided cold start; in UMAP we had no constraints
2. Use of the ratings to update the recommended items
3. Tuning of the content-based recommender
Comparing CSCW and UMAP (only interactive interfaces)

                                        CSCW 2013         UMAP 2013
# Users exposed to recommendations      84                95
# Users who used the recommender        22 (~26%)         50 (~52.6%)
# Users who bookmarked papers           6 (~27.2%)        14 (~28%)
# Talks bookmarked / user avg.          28 / 4.67         103 / 7.36
Average user rating                     3.73/10 (~45.4%)  3.62/8 (~16%)
Usage at the recommender page:
# Talks explored (user avg.)            16.84             14.9
# People returning                      7 (~31.8%)        14 (~28%)
Average time spent on page (seconds)    261.72            353.8
From the Final Survey (CSCW 2013: 11 users; UMAP 2013: 8 users)

"I don't think that Conference Navigator needs a Recommender System":
CSCW M = 2.36, S.E. = 0.2; UMAP M = 1.5, S.E. = 0.21 (p < 0.05)

"I would recommend this system to my colleagues":
CSCW M = 3.36, S.E. = 0.28; UMAP M = 4.25, S.E. = 0.33 (p < 0.05)
- Users perceived SetFusion as a significantly more useful tool in UMAP than in CSCW
CONCLUSIONS & FUTURE WORK
Summary of Results
• Study 1 showed that user controllability had an effect on the user experience with the RecSys.
• Comparing SetFusion across Study 1 and Study 2:
  – The natural elicitation setting (UMAP) let users engage more with the interface's main task: bookmarking recommended papers.
  – Users also perceived the system as more useful in UMAP 2013.
  – Ratings are a form of giving the user control; a big lesson from Study 1: if you ask users for feedback, use it!
Limitations & Future Work
• Apply our approach to other domains (fusion of data sources or recommendation algorithms)
• Find alternatives to scale the approach to more than 3 sets; potential alternatives:
  – Clustering
  – Radial sets
• Consider other factors that might interact with the user experience:
  – Controllability by itself vs. a minimum level of accuracy
THANKS! QUESTIONS? [email protected]