Boris Aleksandrovsky: Real-Time Search at Yammer (Lucene Revolution 2011)

TRANSCRIPT

Realtime revolution at work

REAL-TIME SEARCH AT YAMMER

May 25, 2011
By Boris Aleksandrovsky (http://www.linkedin.com/in/baleksan)
Yammer, Inc.

2

• Communication is hard, search is harder
• What, me grammar?
• Private language
• Conversational language
• Time compressed
• Transient
• Poorly organized
• Authority is suspect
• Social pressures

3

4

Challenges - From information to knowledge

(diagram: Information → Facts → Knowledge; Attention → Engagement → Retention)

(diagram: Messages, Metadata → Personalized Search)

5

Agenda

• Background
• Why search?
• Indexing
• Search
• Tools and methodologies
• Lessons learned
• Future
• Q&A

6

Yammer: Putting Social Media to Work

• Knowledge Management: document-oriented
• Enterprise Collaboration: outcome-focused
• Social Media: people-centric

Yammer makes work:
• Real-time, Social, Mobile
• Collaborative, Contextual
• More Human!

Similar to:
• Facebook
• Twitter
• Wikis
• Groups

7

Yammer: The Enterprise Social Network

• Messaging and Feeds
• Direct Messaging
• User Profiles
• Company Directory
• Groups (Internal)
• Communities (External)
• File Sharing
• Applications
• Integrations
• Web, Desktop, Mobile, Tablet
• Translations
• Network Consultation and Support

Easy. Shared. Searchable. Real-time. Where your company’s knowledge lives.

8

100,000+ companies, including 85% of the Fortune 500 – and growing.

9

What do you discuss at work, and with whom?

“What do our employees think of our 401K program? Is everybody saving?”
“What’s the latest with the XYZ account?”
“What are our recommendations for financial and regulatory reform given the latest news about…?”
“What will be discussed at our Quarterly Sales Kickoff?”
“Where can I find out more about customer events here at the ABC conference? Who’s free to meet up?”
“How can my team better prepare for our next product release?”
“Who has any fresh ideas for…”
“Who will I be working with on this new project?”

• Who do you need to communicate with, across the company?
• How often are the same questions asked?
• Who has the answers? Who has new ideas? Who can help?

10

11

Search use case - Transient Awareness

• Reverse-chronological
• Simple queries
• Facets
  • Date
  • Sender
  • Group

12

Search use case - Knowledge Exploration

• Complicated relevance story
  • tf/idf
  • popularity
  • engagement
  • social distance
• Complicated queries
• Facets
  • Date
  • Sender
  • Group
  • Object type

13

Challenges for Yammer’s search engine

• More knowledge is generated in real time
  • availability latency < 1 sec
  • not always well formed
• Complicated relevance story (see the sketch below)
  • experts and their reputation
  • popularity
  • social graph
  • tagging/topics
  • engagement signals
  • timeliness
  • location
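To make the "complicated relevance story" concrete, here is a minimal sketch of blending social signals with the textual score. The linear form and the weights are illustrative assumptions, not Yammer's actual formula.

```java
// Illustrative only: a linear blend of the social signals listed above
// on top of the textual (tf/idf) score. The weights are made up; a real
// system would tune or learn them per the relevance work the talk describes.
final class SocialScorer {
    double score(double textScore,       // tf/idf from Lucene
                 double popularity,      // e.g. normalized reply/view count
                 double engagement,      // likes, shares
                 double socialDistance,  // closeness in the social graph, 0..1
                 double timeliness) {    // recency decay, 0..1
        return textScore * (1.0
                + 0.3 * popularity
                + 0.2 * engagement
                + 0.3 * socialDistance
                + 0.2 * timeliness);
    }
}
```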

14

Team

• 2 engineers
• 8 man-months
• Lots of fun


15

Indexing

• DB to replica


16

17

Replication

• Independent near-replicas based on a single distributed source of truth
  • Can (and will) get out of sync
• Automatic monitoring of replication quality
  • Are replicas out of sync with other replicas?
    • number of docs; alert if the difference > X (see the sketch below)
  • Are replicas out of sync with the DB?
    • statistical sample of docs
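A minimal sketch of the replica-vs-replica check, assuming a hypothetical Replica interface: compare document counts across replicas and alert when the spread exceeds the threshold X.

```java
import java.util.List;

// Hypothetical view of one search replica.
interface Replica {
    String name();
    long docCount();
}

final class ReplicationMonitor {
    private final long maxAllowedSpread; // the "X" from the slide

    ReplicationMonitor(long maxAllowedSpread) {
        this.maxAllowedSpread = maxAllowedSpread;
    }

    // True (and would trigger an alert) if replicas have drifted apart
    // by more than the allowed number of documents.
    boolean outOfSync(List<Replica> replicas) {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (Replica r : replicas) {
            long n = r.docCount();
            min = Math.min(min, n);
            max = Math.max(max, n);
        }
        return (max - min) > maxAllowedSpread;
    }
}
```

The DB check from the last bullet works the same way but compares a random sample of documents field-by-field against the source of truth, since a full scan is too expensive.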


18

Indexing

• In-replica to index


19

(diagram; label: 30s)

20

Why is it hard?

• No timeliness guarantee
• Fragmentation
• Out-of-order deliveries
• Index dependencies
  • need to denormalize the information
• Need to build for network partition tolerance and redundancy
• But:
  • eventual consistency
  • eventual delivery


21

How do we cope?

• Out-of-order delivery: the source of (most) evil. What can we do?
• A) Assure in-order delivery
  • buffer and wait
  • degrades performance, availability, and timeliness, and is only very eventually consistent
• B) Minimize the probability and ignore the rest
  • timestamp precision
  • clock skew
• C) Arbitrate (see the sketch after the race examples below)
  • timestamps / vector clocks
  • semantics
  • need to index lifecycle events
• Need to build for network partition tolerance and redundancy
• But:
  • consistency guarantee
  • eventual delivery


22

Delete-update race

• [create Message “hello” id=5 ts=12:34:39]
• [delete Message “hello there” id=5 ts=12:45:01]
• [modify Message “hello there” id=5 ts=12:45:01]


id   timestamp   tombstone
5    12:34:39    no
5    12:45:01    yes
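A sketch of option C ("arbitrate") applied to this race, using hypothetical types rather than Yammer's actual code: keep the latest state per message id, let deletes win timestamp ties, and make tombstones sticky, so the late-arriving modify above is dropped no matter which order the events arrive in. As the previous slide notes, ties between two modifies (the next example) still need vector clocks or version numbers to converge.

```java
import java.time.Instant;

// Last-write-wins register with a tombstone, tolerant of out-of-order
// delivery. One instance per message id.
final class MessageState {
    private Instant timestamp = Instant.MIN;
    private String text;
    private boolean tombstone;   // true once a delete has been seen

    // Apply an event regardless of arrival order.
    synchronized void apply(Instant ts, String newText, boolean delete) {
        int cmp = ts.compareTo(timestamp);
        if (cmp < 0) return;              // stale event: drop it
        if (cmp == 0 && !delete) return;  // timestamp tie: only a delete may win
        timestamp = ts;
        if (delete) tombstone = true;     // sticky: later modifies cannot resurrect
        if (!tombstone) text = newText;
    }
}
```

Replaying the three events above in any of the six possible orders ends with timestamp 12:45:01 and tombstone = yes, matching the table.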

23

Multiple update race

• [create Message “hello” id=5 ts=12:34:39]
• [modify Message “hello there now” id=5 ts=12:45:01]
• [modify Message “hello there” id=5 ts=12:45:01]


id   timestamp   text
5    12:34:39    hello
5    12:45:01    hello there now

24

Dupes

• [create Message “hello” id=5 ts=12:34:39]
• [like Message id=5 userId=3 ts=12:45:01]
• [like Message id=5 userId=3 ts=12:45:02]
• [unlike Message id=5 userId=3 ts=12:45:04]


id   timestamp   numLikes
5    12:34:39    0
5    12:45:01    1
5    12:45:02    1
5    12:45:04    0
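A sketch (hypothetical class, not Yammer's code) of making the like count duplicate-tolerant: track the set of liking userIds instead of a bare counter, so a redelivered like from the same user cannot inflate numLikes, exactly as the table shows at 12:45:02.

```java
import java.util.HashSet;
import java.util.Set;

// Duplicate-safe like tracking for one message: the count is derived
// from set membership, so replays are idempotent.
final class LikeState {
    private final Set<Long> likers = new HashSet<>();

    synchronized void like(long userId)   { likers.add(userId); }    // dupe-safe
    synchronized void unlike(long userId) { likers.remove(userId); } // dupe-safe

    synchronized int numLikes() { return likers.size(); }
}
```

Note this only solves duplication; an unlike arriving before its like still needs the per-(message, user) timestamp arbitration shown in the earlier sketch.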

25

Thread example


26

Zoie

• Realtime indexing system
• Open sourced by LinkedIn
• Used in production at LinkedIn for about 3 years
• Deployed at a dozen or so sites
• Thanks to Xiaoyang Gu, Yasuhiro Matsuda, John Wang, and Lei Wang

27

Zoie

• Push events into a buffer and the transaction log
• Push the buffer into Zoie
• When Zoie commits, the transaction log is truncated
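A minimal sketch of that write path with hypothetical types (Zoie's real API differs): every event is appended to a durable transaction log before it is buffered, the buffer is pushed into the indexer, and the log is truncated only after the commit, so a crash between dequeue and commit is recoverable by replaying the log.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event and collaborators; RealtimeIndexer stands in for Zoie.
record IndexEvent(long sequence, String payload) {}

interface TransactionLog {
    void append(IndexEvent e);        // durable append before we ack the queue
    void truncateUpTo(long sequence); // drop entries once they are committed
}

interface RealtimeIndexer {
    void consume(List<IndexEvent> batch); // index into the in-memory segment
    long commit();                        // flush to disk, return last sequence
}

final class IndexingPipeline {
    private final TransactionLog log;
    private final RealtimeIndexer indexer;
    private final List<IndexEvent> buffer = new ArrayList<>();

    IndexingPipeline(TransactionLog log, RealtimeIndexer indexer) {
        this.log = log;
        this.indexer = indexer;
    }

    // Called for each event dequeued from the queue (RabbitMQ in the talk).
    void onEvent(IndexEvent e) {
        log.append(e);   // 1. durable transaction log
        buffer.add(e);   // 2. in-memory buffer
    }

    // Called periodically: push the buffer into the indexer and, once it
    // commits, truncate the now-redundant log prefix.
    void flush() {
        indexer.consume(new ArrayList<>(buffer));
        buffer.clear();
        long committed = indexer.commit();
        log.truncateUpTo(committed);  // 3. commit truncates the log
    }
}
```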

28

Indexing HA

• Clustered queue systems
  • Round-robin across Rabbits introduces further out-of-order problems
• Transaction log
  • covers the window between RabbitMQ dequeue and Zoie disk commit


29

Dual indexing

• Primary index for serving
• Secondary index for reindexing
• Verify secondary index consistency, then for each replica:
  • shut down
  • mv secondary to primary
  • restart
• Availability should not be affected, except for a slight chance of system failure


30

Index consistency problems

• Detect
  • integrity check against the source of truth (see the sketch below)
• Reindex
  • gaps only, or the whole index
  • reindex into the secondary, swap with the primary
• Repair
  • patch in place
  • run on restart
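A sketch of the "detect" step, assuming hypothetical DocStore and IndexView interfaces: sample random ids and report the fraction that disagree with the source of truth, rather than scanning every document.

```java
import java.util.List;
import java.util.Random;

interface DocStore  { String fetch(long id); }  // the DB, source of truth
interface IndexView { String fetch(long id); }  // a search index replica

final class IntegrityChecker {
    private final Random random = new Random();

    // Fraction of sampled ids whose indexed doc disagrees with the DB;
    // a missing doc on either side counts as a mismatch.
    double mismatchRate(DocStore db, IndexView index,
                        List<Long> allIds, int sampleSize) {
        int mismatches = 0;
        for (int i = 0; i < sampleSize; i++) {
            long id = allIds.get(random.nextInt(allIds.size()));
            String expected = db.fetch(id);
            String actual = index.fetch(id);
            if (expected == null ? actual != null : !expected.equals(actual)) {
                mismatches++;
            }
        }
        return (double) mismatches / sampleSize;
    }
}
```

A rate above some threshold would trigger the reindex or repair paths from the bullets above.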


31

Search

• <insert animated architecture slide>


32

Goal

• 50 / 50-500 / 100 per partition, i.e.:
  • 50M docs
  • 50 msec P75 - 500 msec P99
  • 100 qps


34

Payload

• Payload is usually a small JSON object
• For security reasons, only ids and scores are sent out
• One page (usually 10 items) × 6 index types


35

Payload


36

Web Server

• Jersey over Jetty
• Jetty (http://jetty.codehaus.org/jetty/)
  • custom configuration
  • tuned to the required 100 qps
  • generally impeccable; occasional lock contention
• Jersey (JSR 311: http://jsr311.java.net/)
  • annotation driven (see the sketch below)
  • much easier to test
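A minimal JAX-RS (JSR 311) resource in that annotation-driven style. The resource path, SearchService, and SearchResult types here are hypothetical illustrations, not Yammer's actual endpoint.

```java
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

// Hypothetical backing service and wire format.
interface SearchService { SearchResult search(String query, int page); }
record SearchResult(long[] ids, float[] scores) {}

@Path("/search")
public class SearchResource {
    private final SearchService service;

    public SearchResource(SearchService service) { this.service = service; }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public SearchResult search(@QueryParam("q") String query,
                               @QueryParam("page") @DefaultValue("0") int page) {
        // Per the Payload slide: only ids and scores cross the wire.
        return service.search(query, page);
    }
}
```

Because the resource is a plain annotated class, it can be unit tested by instantiating it directly with a fake SearchService, which is the testability win the slide alludes to.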


37

Search master

• More like a router
• Knows about the partitioning scheme
• Performs load normalization across replicas:
  • call all, take the first (see the sketch below); multicast is possible
  • round robin; switch to it for scale
  • DLB (least busy)
• Maintains the primary SLA metrics
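A sketch of "call all, take the first" using only the standard library: the same query goes to every replica of a partition, the first successful answer wins, and the stragglers are cancelled. ExecutorService.invokeAny provides exactly these semantics; wrapping the per-replica calls as Callables is assumed.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class CallAllRouter {
    private final ExecutorService pool = Executors.newCachedThreadPool();

    // Each Callable queries one replica of the target partition.
    // invokeAny returns the result of the first call that completes
    // successfully and cancels the rest.
    <T> T queryReplicas(List<Callable<T>> replicaCalls) throws Exception {
        return pool.invokeAny(replicaCalls);
    }
}
```

This trades extra load (every replica does the work) for the best latency, which is why the slide lists round robin and least-busy routing as the options to switch to at higher scale.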


38

Partitioning

• Simple Jenkins 64-bit hash of networkId (see the sketch below)
• 2-level hash to split large partitions
• Exception list to split large partitions
• Limitation: cannot partition inside a single network
• Repartitioning is expensive
• Consistent hashing?
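A sketch of the routing rule, with one caveat: a splitmix64-style mixer stands in here for the Jenkins 64-bit hash named on the slide. The exception map overrides the hash for known-oversized networks, and because the key is networkId, a single network can never span partitions.

```java
import java.util.Map;

final class Partitioner {
    private final int numPartitions;
    private final Map<Long, Integer> exceptions; // networkId -> partition

    Partitioner(int numPartitions, Map<Long, Integer> exceptions) {
        this.numPartitions = numPartitions;
        this.exceptions = exceptions;
    }

    int partitionFor(long networkId) {
        Integer override = exceptions.get(networkId);
        if (override != null) return override;  // exception list wins
        long h = mix64(networkId);              // stand-in for Jenkins 64-bit
        return (int) Long.remainderUnsigned(h, numPartitions);
    }

    // splitmix64 finalizer: a well-mixed 64-bit hash of the id.
    private static long mix64(long z) {
        z = (z ^ (z >>> 30)) * 0xbf58476d1ce4e5b9L;
        z = (z ^ (z >>> 27)) * 0x94d049bb133111ebL;
        return z ^ (z >>> 31);
    }
}
```

Changing numPartitions remaps almost every network, which is the expensive repartitioning story the slide mentions and why consistent hashing appears as an open question.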


39

Testing

• Indexing
  • idempotent
  • out-of-order delivery
  • tolerance of duplicate and incomplete docs
  • 10K docs delivered in random order, with X% dupes and Y% incomplete records (see the sketch below)
• Search
  • small manual index built by recording events
  • unit-style tests (TestNG) with asserts
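A toy TestNG version of the shuffle-and-replay idea: deliver the same events in random order with a duplicate mixed in, and assert the final state converges. The Event type and the last-write-wins replay are stand-ins for the real pipeline.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import org.testng.annotations.Test;
import static org.testng.Assert.assertEquals;

public class OutOfOrderDeliveryTest {

    // Toy event: the latest timestamp wins; duplicates are harmless.
    record Event(long id, long ts, String text) {}

    private String replay(List<Event> events) {
        long bestTs = Long.MIN_VALUE;
        String text = null;
        for (Event e : events) {
            if (e.ts() > bestTs) { bestTs = e.ts(); text = e.text(); }
        }
        return text;
    }

    @Test
    public void stateIsOrderIndependentAndDupeTolerant() {
        List<Event> events = List.of(
                new Event(5, 1, "hello"),
                new Event(5, 2, "hello there"),
                new Event(5, 3, "hello there now"));

        List<Event> scrambled = new ArrayList<>(events);
        scrambled.add(events.get(1));                   // inject a duplicate
        Collections.shuffle(scrambled, new Random(42)); // deterministic shuffle

        assertEquals(replay(scrambled), replay(events));
    }
}
```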


40

Production

• Measure
• Hardware is cheap, people are not
  • people require more maintenance
• Have enough redundancy


41

Metrics

• JVM, Queue, Logging and Configuration

42

Metrics

• Gauges

43

Metrics

• Meters

44

Metrics

• Timers

45

Metrics

• https://github.com/codahale/metrics
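A sketch of the gauge/meter/timer trio from the preceding slides, using the 2011-era com.yammer.metrics API (signatures recalled from memory, so treat them as assumptions; the library's API changed in later versions). The metric names and the queue-depth gauge are illustrative.

```java
import java.util.Queue;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;
import com.yammer.metrics.Metrics;
import com.yammer.metrics.core.Gauge;
import com.yammer.metrics.core.Meter;
import com.yammer.metrics.core.Timer;
import com.yammer.metrics.core.TimerContext;

public class SearchServiceMetrics {
    // Meter: rate of queries per second.
    private final Meter queries = Metrics.newMeter(
            SearchServiceMetrics.class, "queries", "queries", TimeUnit.SECONDS);
    // Timer: latency distribution (P75/P99 for the SLA slide).
    private final Timer latency = Metrics.newTimer(
            SearchServiceMetrics.class, "query-latency",
            TimeUnit.MILLISECONDS, TimeUnit.SECONDS);

    public SearchServiceMetrics(final Queue<?> indexingQueue) {
        // Gauge: an instantaneous value, here the indexing queue depth.
        Metrics.newGauge(SearchServiceMetrics.class, "queue-depth",
                new Gauge<Integer>() {
                    @Override public Integer value() { return indexingQueue.size(); }
                });
    }

    public <T> T timedSearch(Callable<T> search) throws Exception {
        queries.mark();                    // count the query
        TimerContext ctx = latency.time(); // start the latency clock
        try {
            return search.call();
        } finally {
            ctx.stop();
        }
    }
}
```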

46

Lessons

• Do not underestimate your data model
• Tradeoff between consistency, real-time availability, and correctness
• Measure
• Flexible partitioning scheme
• Data recovery plan

47

Future

• Dynamic routing
  • Zookeeper
• Partition rebalancing
• Multiple sub-partitions with different SLAs
• Work on relevancy
• Multiple languages
• Document parsing
• External data
• Scala

48

Q&A Session: What’s On Your Mind?