Google_Controlled Experimentation_Panel_The Hive


TRANSCRIPT

1. (A Few) Key Lessons Learned Building LinkedIn Online Experimentation Platform
Experimentation Panel, 3-20-13

2. Experimentation at LinkedIn
- Essential part of the release process
- 1000s of concurrent experiments
- Complex range of target populations based on content, behavior and social graph data
- Cater to a wide demographic
- Large set of KPIs

3. The next frontier
- KPIs: beyond CTR, multiple-objective optimization, KPI reconciliation
- User visit imbalance
- Virality-preserving A/B testing
- Context-dependent novelty effect
- Explicit feedback vs. implicit feedback

4. Picking the right KPI can be tricky
- Example: engagement measured by # comments on posts on a blog website
- KPI1 = average # comments per user: B wins by 30%
- KPI2 = ratio of active (at least one posting) to inactive users: A wins by 30%
- How is this possible? Do you want a smaller, highly engaged community, or a larger, less engaged community? (A toy example reproducing the reversal follows slide 12.)

5. Winback campaign
- Definition: returning to the web site at least once? Or returning with a certain level of engagement, possibly comparable to, more than, or a bit less than before the account went dormant? (A sketch of both definitions follows slide 12.)
- Example: reminder email at 30 days after registration
- [Chart: loyalty distribution, time since last visit (days) vs. occurrence, for users registered 335 days ago; annotation: "Came back once at 30 days, then went dormant"]

6. Multiple competing objectives
- Groups: suggest relevant groups that one is more likely to participate in
- TalentMatch (top 24 matches of a posted job, sold as a product): suggest skilled candidates who will likely respond to hiring managers' inquiries
- Semantic + engagement objectives

7. TalentMatch use case
- KPI: repeated TM buyers, but that requires a 6-month to 1-year window!
- Short-term proxy with predictive power: optimize for InMail response rate while controlling for booking rate and InMail sent rate

8. KPIs reconciliation
- How do you compare apples and oranges? E.g. a People vs. Job recommendations swap: X% lift in job applications vs. Y% drop in invitations
- Value of an invitation vs. value of a job application? (A value-weighting sketch follows slide 12.)
- Long-term cascading effect on a set of site-wide KPIs

9. User visit imbalance
- Observed sample ≠ intended random sample
- Consider an A/B test on the homepage lasting L days. Your likely observed sample will have: repeated (>> L) observations for super power users; L observations for daily users; L/7 observations for weekly users; NO observations for users coming less often than every L days
- Statistics: random effects models (a mixed-model sketch follows slide 12)

10. Virality-preserving A/B testing
- Random sampling destroys the social graph
- Critical for: social referrals, warm recommendations, wisdom of your friends / social proof
- Core + fringe sampling to minimize this (WWW'11 Facebook, WWW'12 Yahoo!); a toy version follows slide 12
- Group recommendations

11. Context-dependent novelty effect
- Job recommendation algorithms A/B test: lift in the first 2 weeks was 2X the long-term stationary lift (an early-vs-late lift sketch follows slide 12)
- TalentMatch: no short-term novelty effect

12. Explicit feedback A/B testing
- Enables you to understand the usefulness of a product/feature/algorithm with unequaled depth
- Text-based A/B tests → sentiment analysis
- Reveals unexpected complexities, e.g. "local" means different things to different members
- Prevents misinterpretation of implicit user feedback
- Helps prioritize future improvements
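Slide 4's reversal is easy to reproduce with made-up counts. A minimal sketch in Python, assuming two 100-user arms where B concentrates commenting among fewer, heavier users; all numbers are invented for illustration:

    # Arm A: 52 of 100 users each post once.
    # Arm B: 45 of 100 users post, some twice. Counts are hypothetical.
    comments_a = [1] * 52 + [0] * 48
    comments_b = [2] * 23 + [1] * 22 + [0] * 55

    def kpi1(comments):
        # KPI1: average # comments per user
        return sum(comments) / len(comments)

    def kpi2(comments):
        # KPI2: ratio of active (at least one posting) to inactive users
        active = sum(1 for c in comments if c > 0)
        return active / (len(comments) - active)

    print(kpi1(comments_b) / kpi1(comments_a) - 1)  # ~ +0.31: B wins on KPI1
    print(kpi2(comments_a) / kpi2(comments_b) - 1)  # ~ +0.32: A wins on KPI2

Both claims on slide 4 hold simultaneously because KPI1 rewards concentrated heavy activity while KPI2 rewards breadth of participation.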
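Slide 5's two winback definitions can label the same user differently. A sketch of both, assuming per-user sessions stored as (day, engagement_score) pairs, a known dormancy day, and a 0.8 "comparable engagement" threshold; the data layout and threshold are assumptions:

    def won_back_once(sessions, dormant_day):
        # Definition 1: any return visit at all after going dormant
        return any(day > dormant_day for day, _ in sessions)

    def won_back_engaged(sessions, dormant_day, ratio=0.8):
        # Definition 2: post-dormancy engagement comparable to before
        before = [e for d, e in sessions if d <= dormant_day]
        after = [e for d, e in sessions if d > dormant_day]
        if not before or not after:
            return False
        return sum(after) / len(after) >= ratio * (sum(before) / len(before))

A user who "came back once at 30 days, then went dormant" (slide 5's chart annotation) counts as won back under the first definition but typically not under the second.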
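One way to reconcile slide 8's apples and oranges is to assign each event type a value and compare arms on total value. The weights below are placeholders, not LinkedIn's actual valuations:

    # Per-event values are placeholders; in practice they might come from
    # downstream revenue or long-term engagement models.
    VALUE = {"job_application": 1.0, "invitation": 0.6}

    def net_value(count_deltas, values=VALUE):
        # count_deltas: change in raw event counts, treatment minus control
        return sum(values[k] * d for k, d in count_deltas.items())

    # X% lift in job applications vs. Y% drop in invitations, as counts:
    print(net_value({"job_application": 120, "invitation": -80}))  # 72.0

The hard part, as the slide notes, is choosing the values themselves, especially when events cascade into long-term site-wide effects.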
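For slide 9's visit imbalance, a per-user random intercept keeps a super power user's many repeated observations from swamping the treatment estimate. A minimal sketch with statsmodels, assuming one row per visit with hypothetical user_id, treatment, and metric columns:

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per visit; user 1 is a "super power user" with many visits.
    df = pd.DataFrame({
        "user_id":   [1, 1, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6],
        "treatment": [1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0],
        "metric":    [3.1, 2.9, 3.3, 3.0, 3.2, 1.2,
                      1.4, 2.0, 2.2, 1.1, 2.4, 1.3],
    })

    # Random intercept per user: repeated visits by the same user are
    # modeled as correlated rather than as independent observations.
    model = smf.mixedlm("metric ~ treatment", df, groups=df["user_id"])
    print(model.fit().summary())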
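A toy version of slide 10's core + fringe idea: grow a connected core so treated members keep treated neighbors, then take the core's one-hop neighborhood as the fringe. A sketch with networkx; the breadth-first growth rule, sizes, and seed choice are assumptions, not the WWW'11/'12 designs:

    import random
    import networkx as nx

    def core_plus_fringe(graph, core_size, seed=None):
        rng = random.Random(seed)
        frontier = [rng.choice(list(graph.nodes))]
        core = set()
        # Grow a connected core breadth-first so treated members keep
        # treated connections (preserving within-treatment virality).
        while frontier and len(core) < core_size:
            node = frontier.pop(0)
            if node in core:
                continue
            core.add(node)
            frontier.extend(n for n in graph.neighbors(node) if n not in core)
        # Fringe: neighbors of the core, exposed to spillover but untreated.
        fringe = {n for c in core for n in graph.neighbors(c)} - core
        return core, fringe

    G = nx.karate_club_graph()  # small built-in social graph for the demo
    core, fringe = core_plus_fringe(G, core_size=8, seed=42)
    print(sorted(core), sorted(fringe))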
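Slide 11's context-dependent novelty effect can be spotted by comparing early lift against stationary lift. A sketch assuming a dataframe with day, arm ("A"/"B"), and metric columns, and a hypothetical 14-day cutoff:

    import pandas as pd

    def lift(df):
        # Relative lift of arm B over arm A on the mean metric
        means = df.groupby("arm")["metric"].mean()
        return means["B"] / means["A"] - 1.0

    def novelty_gap(df, cutoff_day=14):
        early = lift(df[df["day"] <= cutoff_day])
        late = lift(df[df["day"] > cutoff_day])
        # For slide 11's job recommendations test, early would be roughly
        # 2x late; for TalentMatch the two would be about the same.
        return early, late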
13. References
- C. Posse, 2012. A (Few) Key Lessons Learned Building Recommender Systems for Large-Scale Social Networks. Invited talk, Industry Practice Expo, 18th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Beijing, China.
- M. Rodriguez, C. Posse and E. Zhang, 2012. Multiple Objective Optimization in Recommendation Systems. Proceedings of the Sixth ACM Conference on Recommender Systems, pp. 11-18.
- M. Amin, B. Yan, S. Sriram, A. Bhasin and C. Posse, 2012. Social Referral: Using Network Connections to Deliver Recommendations. Proceedings of the Sixth ACM Conference on Recommender Systems, pp. 273-276.
- X. Amatriain, P. Castells, A. de Vries and C. Posse, 2012. Workshop on Recommendation Utility Evaluation: Beyond RMSE. Proceedings of the Sixth ACM Conference on Recommender Systems, pp. 351-352.