Peer Review in the LiquidPub Project

Reviewing Peer Review

Uploaded by aliaksandr-birukou

Posted on 24-Jan-2015


DESCRIPTION

Slides presented at the 2nd Snow Workshop (http://wiki.liquidpub.org/mediawiki/index.php/Second_Workshop_on_Scientific_Knowledge_Creation%2C_Dissemination%2C_and_Evaluation)

TRANSCRIPT

Page 1: Peer Review in the LiquidPub project

Reviewing Peer Review

Page 2:

Metric Dimensions

• Quality: Statistics, Kendall Distance, Divergence
• Fairness: Disagreement, Biases, Robustness, Unbiasing
• Efficiency: Effort-invariant alternatives, Effort vs. quality, Min. criteria

Page 3:

Quality-related Metrics: real vs. ideal

• Real peer review ranking vs. ideal ranking
▫ Ideal? Subjective vs. objective. But each process could/should define approximate indicators of quality, such as citations, downloads, community voting, success in a second phase, publication, or patents…
• If an approximate ideal ranking is available, we can measure the difference in various ways, e.g.:
▫ Kendall distance / Kendall rank correlation (see the sketch below)
▫ Divergence metric
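A minimal Python sketch of the first option, the normalized Kendall distance, i.e. the fraction of paper pairs that the two rankings order differently (the paper ids and ranks below are invented; scipy.stats.kendalltau computes the closely related rank correlation):

from itertools import combinations

def kendall_distance(rank_a: dict, rank_b: dict) -> float:
    """Fraction of contribution pairs ordered differently by the two rankings:
    0.0 = identical order, 1.0 = exactly reversed."""
    pairs = list(combinations(rank_a, 2))
    discordant = sum(
        1 for x, y in pairs
        # A pair is discordant when the two rankings disagree on its order.
        if (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y]) < 0
    )
    return discordant / len(pairs)

# Toy data: peer review ranks vs. an approximate ideal (citation) ranking.
review_rank = {"p1": 1, "p2": 2, "p3": 3, "p4": 4}
citation_rank = {"p1": 2, "p2": 1, "p3": 3, "p4": 4}
print(kendall_distance(review_rank, citation_rank))  # 1 discordant pair of 6 ≈ 0.167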

Page 4:

Divergence Metric

$$\mathrm{NDiv}_{\rho_i,\rho_a}(t,n,C)=\sum_{i=0}^{t}p_t(i)\,w_i \qquad\text{with}\qquad p_t(i)=\frac{C_t^{i}\,C_{n-t}^{t-i}}{C_n^{t}},\quad w_i=\frac{t-i}{t}$$

e.g.

$$\mathrm{NDiv}_{\rho_i,\rho_a}(1,n,C)=p_1(0)\cdot\tfrac{1}{1}+p_1(1)\cdot\tfrac{0}{1}=\frac{n-1}{n}$$

When the second ranking is random, we have:

[Figure: N-Divergence vs. normalized t (from 1/n to n/n), with curves for independent, correlated, and inversely correlated rankings.]
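Concretely, under this reconstruction NDiv at depth t measures how much the two top-t sets fail to overlap, and p_t(i) is the hypergeometric probability of an overlap of i when the second ranking is random. A minimal Python sketch of that reading (my reconstruction, not the project's code):

from math import comb

def divergence(top_a: list, top_b: list) -> float:
    """Divergence of two top-t lists: 0 = same set, 1 = disjoint sets."""
    t = len(top_a)
    overlap = len(set(top_a) & set(top_b))
    return (t - overlap) / t

def expected_divergence(t: int, n: int) -> float:
    """Expected NDiv(t, n, C) when the second ranking is drawn at random."""
    total = 0.0
    for i in range(t + 1):
        # p_t(i): hypergeometric probability that the two top-t sets share i items.
        p = comb(t, i) * comb(n - t, t - i) / comb(n, t)
        total += p * (t - i) / t  # w_i = (t - i) / t
    return total

print(expected_divergence(1, 10))  # (n-1)/n = 0.9, matching the example above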

Page 5:

Prior vs. after discussion

NDiv(53, 206, 206) ≈ 0.36: ca. 74 (36%) of the contributions were affected by the discussion phase.

Page 6:

Results: peer review ranking vs. citation count

[Figure: Divergence vs. normalized t for the peer review ranking against the citation-count ranking.]

Page 7:
Page 8:

Fairness

• Definition: A review process is fair if and only if the acceptance of a contribution does not depend on the particular set of PC members that reviews it.

• The key is in the assignment of papers to reviewers: an assignment is unfair if the specific choice of reviewers influences (makes more predictable) the fate of the paper, as the sketch below illustrates.
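As a rough, hypothetical illustration (all names and data are invented, and this is not the project's method): if the acceptance rate conditioned on drawing a particular PC member departs strongly from the overall rate, the assignment makes a paper's fate more predictable:

from collections import defaultdict

# Invented data: paper -> (assigned PC members, accepted?)
assignments = {
    "p1": ({"r1", "r2"}, True),
    "p2": ({"r1", "r3"}, True),
    "p3": ({"r2", "r3"}, False),
    "p4": ({"r1", "r2"}, True),
    "p5": ({"r2", "r3"}, False),
}

overall = sum(accepted for _, accepted in assignments.values()) / len(assignments)

per_reviewer = defaultdict(list)
for reviewers, accepted in assignments.values():
    for r in reviewers:
        per_reviewer[r].append(accepted)

for r, outcomes in sorted(per_reviewer.items()):
    rate = sum(outcomes) / len(outcomes)
    # A large gap from the overall rate suggests a paper's fate depends on
    # drawing this reviewer, i.e. a potentially unfair assignment.
    print(f"{r}: acceptance rate {rate:.2f} vs. overall {overall:.2f}")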

Page 9:

Computed Normalized Rating Biases

                     C1      C2      C3      C4
Top accepting       2.66    3.44    1.52    1.17
Top rejecting      -1.74   -2.78   -2.06   -1.17
> +|min bias|        13%      5%      9%      7%
< -|min bias|        12%      4%      8%      7%

                                         C1    C2    C3    C4
Unbiasing effect (divergence)           13%    9%   11%   14%
Unbiasing effect (reviewers affected)    10    16     5     4
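The slides do not define how the biases are computed; one plausible reading (an assumption, not the authors' stated formula) is that a reviewer's bias is the average gap between their mark and the mean mark of each paper they reviewed, with positive values marking "top accepting" and negative values "top rejecting" reviewers, and that unbiasing subtracts this gap before re-ranking. A minimal sketch under that assumption, on invented data:

from collections import defaultdict
from statistics import mean

# Invented marks: (reviewer, paper) -> mark on a 1-10 scale.
marks = {
    ("r1", "p1"): 8, ("r2", "p1"): 5,
    ("r1", "p2"): 9, ("r3", "p2"): 6,
    ("r2", "p3"): 4, ("r3", "p3"): 5,
}

by_paper = defaultdict(list)
for (_, paper), m in marks.items():
    by_paper[paper].append(m)
paper_mean = {p: mean(ms) for p, ms in by_paper.items()}

gaps = defaultdict(list)
for (reviewer, paper), m in marks.items():
    gaps[reviewer].append(m - paper_mean[paper])
# Positive bias: a "top accepting" reviewer; negative: "top rejecting".
bias = {r: mean(g) for r, g in gaps.items()}

# Unbiasing: subtract each reviewer's bias from their marks, then re-rank.
unbiased = {(r, p): m - bias[r] for (r, p), m in marks.items()}
print(bias)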

Page 10:

Disagreement metric

• Through this metric we compute the similarity between the marks given by different reviewers to the same contribution (see the sketch below).

• The rationale behind this metric is that in a review process we expect some degree of agreement between reviewers.
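The slides leave the formula implicit; one plausible instantiation (an assumption, not the authors' definition) is the mean pairwise absolute difference of the marks on a contribution, normalized by the width of the rating scale:

from itertools import combinations
from statistics import mean

def disagreement(marks: list, scale: tuple = (1, 10)) -> float:
    """0 = all reviewers gave the same mark, 1 = maximal possible spread."""
    lo, hi = scale
    return mean(abs(a - b) for a, b in combinations(marks, 2)) / (hi - lo)

print(disagreement([7, 8, 7]))   # mild disagreement: ≈ 0.07
print(disagreement([1, 10, 5]))  # strong disagreement: ≈ 0.67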


Page 11:

Disagreement vs. number of reviews

Page 12:

The road ahead

• Real-time accuracy estimation
• Speed ranking
• Ranking vs. marking

Page 13:

Thank you