QUANTITATIVE DATA CLEANING FOR LARGE DATABASES

JOSEPH M. HELLERSTEIN

db.cs.berkeley.edu/jmh/talks/qdb09.pdf


Page 1

QUANTITATIVE DATA CLEANING FOR LARGE DATABASES

JOSEPH M. HELLERSTEIN


University of California, Berkeley

Page 2

BACKGROUND

a funny kind of keynote: a trip to the library

robust statistics, DB analytics

some open problems/directions: scaling robust stats, intelligent data entry forms

J. M. Hellerstein, “Quantitative Data Cleaning for Large Databases”, http://db.cs.berkeley.edu/jmh/papers/cleaning-unece.pdf

Page 3

TODAY

background
outliers and robust statistics
multivariate settings
research directions

Page 4

QDB ANGLES OF ATTACK

data entry: data modeling, form design, interfaces

organizational management: TDQM (Total Data Quality Management)

data auditing and cleaning: the bulk of our papers?

exploratory data analysis: the more integration, the better!

Page 5

CULTURAL VALUES: WHAT IS A VALUE?

DB view: data. descriptive statistics; model-free (nonparametric). + works with any data; + no model-fitting magic

Stat view: evidence. inductive (inferential) statistics; model the process producing the data (parametric). + probabilistic interpretation: likelihoods on values, imputation of missing data, forecasting future data

Page 6

TODAY

background
outliers and robust statistics
multivariate settings
research directions

Page 7

DAD, WHAT’S AN OUTLIER?

Page 8

FAR FROM THE CENTER

center, dispersion

Page 9

FAR FROM THE CENTER

center, dispersion

Normal distribution! a.k.a. Gaussian, bell curve: mean, variance

Page 10

CENTER/DISPERSION (TRADITIONAL)

ages of employees (US): 12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 68 450

[plot: density(age), N = 19, bandwidth = 9.877]

Page 11

CENTER/DISPERSION (TRADITIONAL)

ages of employees (US): 12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 68 450

mean 58.52632

[plot: mean and density(age), N = 19, bandwidth = 9.877]

Page 12

CENTER/DISPERSION (TRADITIONAL)

ages of employees (US): 12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 68 450

mean 58.52632

variance 9252.041

[plot: density(age) and fitted normal curve, N = 19, bandwidth = 9.877]

Page 13

CENTER/DISPERSION (TRADITIONAL)

ages of employees (US): 12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 68 450

median 37

[plot: median and density(age), N = 19, bandwidth = 9.877]

Page 14

CENTER/DISPERSION (ROBUST)

ages of employees (US): 12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 68 450

median 37

MAD 22.239 (R's mad(), which scales by the 1.4826 consistency factor; the raw MAD here is 15)

[plot: density(age) and robust normal curve, N = 19, bandwidth = 9.877]

Page 15

SUBTLER PROBLEMS

12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 68 450

Page 16

SUBTLER PROBLEMS

12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 110 450

Page 17

SUBTLER PROBLEMS

12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 110 450

[plot: density(age), N = 19, bandwidth = 9.877]

Page 18

0 100 200 300 400

0.00

00.

005

0.01

00.

015

density(age)

N = 19 Bandwidth = 9.877

Dens

ity

SUBTLER PROBLEMS

12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 110 450

Masking: the magnitude of one outlier masks smaller outliers

makes manual removal of outliers tricky

Page 19

SUBTLER PROBLEMS

12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 110 450

Robust stats: handle multiple outliers; robust w.r.t. magnitude of outliers

[plot: density(age), N = 19, bandwidth = 9.877]

Page 20

ROBUSTNESS: INTUITION

handle multiple outliers; robust to magnitude of an outlier

Page 21

HOW ROBUST IS ROBUST?

Breakdown Point measures robustness of an estimator

proportion of “dirty” data the estimator can handle before giving an arbitrarily erroneous result

think adversarially

best possible breakdown point: 50% (beyond 50% “noise”, what’s the “signal”?)

Page 22

SOME BREAKDOWN POINTS

mean?

mode?

standard deviation?
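These questions have crisp answers for the first and last: a single corrupted value can move the mean and the standard deviation arbitrarily far, so both have breakdown point 0%. A quick numeric sketch (Python with numpy, reusing the employee-ages example from the earlier slides):

import numpy as np

ages = np.array([12, 13, 14, 21, 22, 26, 33, 35, 36, 37,
                 39, 42, 45, 47, 54, 57, 61, 68, 450])

# One wild value (450) drags the non-robust estimators arbitrarily far.
print(np.mean(ages))         # 58.53, far above the bulk of the data
print(np.std(ages, ddof=1))  # ~96.2, inflated by the single outlier

# The median shrugs it off: it tolerates up to 50% contamination.
print(np.median(ages))       # 37.0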

Page 23

SOME ROBUST CENTERS

median: the value that evenly splits a set/distribution into higher and lower halves

k% trimmed mean: remove the lowest/highest k% of values; compute the mean of the remainder

k% winsorized mean: remove the lowest/highest k% of values; replace the low removed values with the lowest remaining value and the high removed values with the highest remaining value; compute the mean of the resulting set

12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 110 450

Page 24

SOME ROBUST CENTERS

median (37): the value that evenly splits a set/distribution into higher and lower halves

k% trimmed mean: remove the lowest/highest k% of values; compute the mean of the remainder

k% winsorized mean: remove the lowest/highest k% of values; replace the low removed values with the lowest remaining value and the high removed values with the highest remaining value; compute the mean of the resulting set

12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 110 450

Page 25

SOME ROBUST CENTERS

median (37): the value that evenly splits a set/distribution into higher and lower halves

k% trimmed mean (37.933): remove the lowest/highest k% of values; compute the mean of the remainder

k% winsorized mean: remove the lowest/highest k% of values; replace the low removed values with the lowest remaining value and the high removed values with the highest remaining value; compute the mean of the resulting set

12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 110 450

Page 26

SOME ROBUST CENTERS

median (37): the value that evenly splits a set/distribution into higher and lower halves

k% trimmed mean (37.933): remove the lowest/highest k% of values; compute the mean of the remainder

k% winsorized mean (37.842): remove the lowest/highest k% of values; replace the low removed values with the lowest remaining value and the high removed values with the highest remaining value; compute the mean of the resulting set

14 14 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 61 61
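As a concrete check, the slide's numbers can be reproduced directly; a small numpy sketch, trimming/winsorizing 2 values (about 10% of n = 19) from each end:

import numpy as np

ages = np.sort(np.array([12, 13, 14, 21, 22, 26, 33, 35, 36, 37,
                         39, 42, 45, 47, 54, 57, 61, 110, 450]))
k = 2  # values to trim/winsorize at each end

trimmed = ages[k:-k].mean()           # 37.933: mean of the 15 survivors
winsorized = ages.copy()
winsorized[:k] = ages[k]              # low end pulled up to lowest survivor
winsorized[-k:] = ages[-k - 1]        # high end pulled down to highest survivor
print(trimmed, winsorized.mean())     # 37.933..., 37.842...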

Page 27

ROBUST CENTER BREAKDOWN POINTS

median?

k% trimmed/winsorized mean?

k ~= 50% ?

Page 28

ROBUST DISPERSION (1D)

interquartile range (IQR): the difference between the 25th and 75th percentiles

MAD: Median Absolute Deviation, $\operatorname{median}_i\,|Y_i - \tilde Y|$ where $\tilde Y = \operatorname{median}(Y)$

breakdown points? note: for symmetric distributions, MAD = IQR/2

12 13 14 21 22 26 33 35 36 37 39 42 45 47 54 57 61 68 450

Page 29

ROBUSTLY FIT A NORMAL

base case: standard normal (symmetric, centered at 0)

for the standard normal, the MAD equals the 75th-percentile point: $\Phi^{-1}(0.75) \approx 0.6745$

so estimate the std dev in terms of MAD: $\hat\sigma = 1.4826 \cdot \mathrm{MAD}$ (since $1/0.6745 \approx 1.4826$)

center at median and off you go!

[plot: density(age) with robustly fitted normal curve, N = 19, bandwidth = 9.877]
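Putting the recipe together: center at the median, scale the MAD by 1.4826 to estimate σ, and flag points implausible under the fitted normal. A numpy sketch; the 3.5 cutoff is an illustrative choice, not from the talk. (R's mad() already includes the 1.4826 factor, which is why the earlier slide reports 22.239 rather than the raw 15.)

import numpy as np

ages = np.array([12, 13, 14, 21, 22, 26, 33, 35, 36, 37,
                 39, 42, 45, 47, 54, 57, 61, 68, 450])

med = np.median(ages)                 # robust center: 37.0
mad = np.median(np.abs(ages - med))   # raw MAD: 15.0
sigma = 1.4826 * mad                  # ~22.24, the robust scale estimate

# Robust z-scores: distance from the median in estimated std devs.
z = (ages - med) / sigma
print(ages[np.abs(z) > 3.5])          # [450] -- flagged as an outlier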

Page 30

SCALABLE IMPLEMENTATION

our metrics so far: order statistics (position in value order)

non-trivial to scale up to big data, but there are various tricks

Page 31

SQL FOR MEDIAN?

Page 32

SQL FOR MEDIAN?

-- A naive median query
-- (only finds a row where the counts match exactly: odd n, distinct values)
SELECT c AS median
FROM T
WHERE (SELECT COUNT(*) FROM T AS T1 WHERE T1.c < T.c)
    = (SELECT COUNT(*) FROM T AS T2 WHERE T2.c > T.c)

Page 33

SQL FOR MEDIAN? [Rozenshtein, Abramovich, Birger 1997]

SELECT x.c AS median
FROM T x, T y
GROUP BY x.c
HAVING SUM(CASE WHEN y.c <= x.c THEN 1 ELSE 0 END) >= (COUNT(*)+1)/2
   AND SUM(CASE WHEN y.c >= x.c THEN 1 ELSE 0 END) >= (COUNT(*)/2)+1

Page 34

SORT-BASED SQL FOR MEDIAN
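The query on this slide did not survive extraction. A sketch of one standard sort-based formulation, assuming a dialect that allows scalar subqueries in LIMIT/OFFSET (e.g. PostgreSQL or SQLite); not necessarily the query from the talk:

-- Sort, then average the middle one or two rows.
SELECT AVG(c) AS median
FROM (SELECT c
      FROM T
      ORDER BY c
      LIMIT 2 - (SELECT COUNT(*) FROM T) % 2   -- 1 row if n is odd, 2 if even
      OFFSET (SELECT (COUNT(*) - 1) / 2 FROM T)
     ) AS middle;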

Page 35

EFFICIENT APPROXIMATIONS

one-pass, limited-memory median/quantile: Manku, et al., SIGMOD 1998; Greenwald/Khanna, SIGMOD 2001. keep certain exemplars in memory (with weights); the bag of exemplars is used to approximate the median

Hsiao, et al. 2009: one-pass approximate MAD, based on Flajolet-Martin “COUNT DISTINCT” sketches; a Proof Sketch: distributed and verifiable!

natural implementations: SQL: user-defined agg; Hadoop: Reduce function

Page 36

SQL FOR APPROXIMATE MEDIAN

given: UDF “approx_median”
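A minimal usage sketch; the aggregate name follows the slide, while the table and column names are hypothetical:

SELECT department, approx_median(age) AS median_age
FROM employees
GROUP BY department;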

Page 37

ORDER STATISTICS

methods so far: “L-estimators”: linear (hence “L”) combinations of order statistics

simple, intuitive

well-studied for big datasets

but fancier stuff is popular in statistics, e.g. for multivariate dispersion, robust regression...

Page 38

M-ESTIMATORS

widely used class

based on Maximum Likelihood Estimators (MLEs)

MLE: maximize $\prod_{i=1}^{n} f(x_i)$, i.e. minimize $\sum_{i=1}^{n} -\log f(x_i)$

M-estimators generalize: minimize $\sum_{i=1}^{n} \rho(x_i)$, where $\rho$ is chosen carefully

nice if $d\rho/dy$ goes up near the origin, decreasing to 0 far from the origin: redescending M-estimators
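As a concrete instance (not from the talk): the Huber M-estimator of location, fit by iteratively reweighted averaging with the scale fixed by the MAD. Points beyond c robust standard units are down-weighted rather than trusted fully; a minimal Python sketch:

import numpy as np

def huber_location(x, c=1.345, iters=50):
    """Huber M-estimate of center via iteratively reweighted averaging."""
    med = np.median(x)
    sigma = 1.4826 * np.median(np.abs(x - med))  # robust scale, fixed up front
    mu = med
    for _ in range(iters):
        r = (x - mu) / sigma
        w = np.ones_like(r)
        far = np.abs(r) > c
        w[far] = c / np.abs(r[far])              # down-weight distant points
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < 1e-9:
            break
        mu = mu_new
    return mu

ages = np.array([12, 13, 14, 21, 22, 26, 33, 35, 36, 37,
                 39, 42, 45, 47, 54, 57, 61, 110, 450], dtype=float)
print(huber_location(ages))   # lands near the median, unlike the raw mean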

Page 39

STUFF IN THE PAPER

No time today for outliers in:

indexes (e.g. inflation) and rates (e.g. car speed): textbook stuff for the non-robust case; robustification seems open

timeseries: a relatively recent topic in the stat and DB communities

non-normality: multimodal, power-law (Zipf) distributions

[plots: frequency spectrum; density(age), N = 34, bandwidth = 23.53]

Page 40

TODAY

background
outliers and robust statistics
multivariate settings
research directions

Page 41

MOVING TO MULTIPLE DIMENSIONS

intuition: multivariate normal. center: multidimensional mean; dispersion: ?


Page 43

(SAMPLE) COVARIANCE

d×d matrix for N d-dimensional points:

$q_{ij} = \frac{1}{N-1} \sum_{k=1}^{N} (x_{ki} - \bar{x}_i)(x_{kj} - \bar{x}_j)$

properties:
symmetric
diagonal is the independent variance per dimension
off-diagonal is (roughly) correlations

Page 44

MULTIVARIATE DISPERSION

Mahalanobis distance of vector x from mean µ:

$\sqrt{(x-\mu)^T\, S^{-1}\, (x-\mu)}$

where S is the covariance matrix

Not robust!

Simple SQL in 2d, much harder in >2d: requires matrix inversion!
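A numpy sketch of the distance on synthetic data (illustrative only); note that because it is built on the sample mean and covariance, the score inherits their 0% breakdown:

import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([30, 60], [[25, 15], [15, 36]], size=200)
X = np.vstack([X, [30, 300]])              # plant one multivariate outlier

mu = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
d = np.sqrt(np.einsum('ij,jk,ik->i', diff, S_inv, diff))
print(d.argmax())                          # index 200: the planted point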

Page 45

ROBUST MULTIVARIATE OUTLIERS

proposed heuristics: iteratively trim the max-Mahalanobis point; or rescale units component-wise, then use Euclidean thresholds

robust estimators for mean/covariance: this gets technical, e.g. Minimum Volume Ellipsoid (MVE); scale-up of these methods typically open

depth-based approaches: “stack of oranges”: convex hull peeling depth; others...

Page 46

TIME CHECK

time for distance-based outlier detection?

Page 47

DISTANCE-BASED OUTLIERS

non-parametric; various metrics:

p is a (k, D)-outlier if at most k other points lie within D of p [Kollios, et al., TKDE 2003]
p is an outlier if the % of objects at large distance is high [Knorr/Ng, ICDE 1999]
top n elements in distance to their kth nearest neighbor [Ramaswamy, et al., SIGMOD 2000]

accounting for variations in cluster density: average density of a point's neighborhood w.r.t. the density of its nearest neighbors' neighborhoods [Breunig, et al., SIGMOD 2000]
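A brute-force sketch of the third metric above (distance to the kth nearest neighbor, per Ramaswamy et al.); it materializes all O(n²) pairwise distances, so real systems use spatial indexes or pruning:

import numpy as np

def knn_outlier_indices(X, k=5, n=3):
    # Score each point by the distance to its k-th nearest neighbor;
    # return the indices of the n highest-scoring points.
    diff = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))   # all pairwise distances
    np.fill_diagonal(D, np.inf)             # exclude self-distance
    kth = np.sort(D, axis=1)[:, k - 1]      # distance to k-th NN
    return np.argsort(-kth)[:n]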

Page 48

ASSESSING DISTANCE-BASED METHODS

descriptive statistics: no probability densities, so no expectations, predictions

distance metrics not scale-invariant: complicates usage in settings where data or units are not well understood

Page 49

TODAY

background
outliers and robust statistics
multivariate settings
research directions

Page 50

RESEARCH DIRECTIONS

open problems in scaling; new agenda: intelligent forms

Page 51

SOME OPEN ISSUES

scalable MAD

robustly cleaning large, non-normal datasets

scalable, robust multivariate dispersion: scalable matrix inversion for Mahalanobis (already done?)

Minimum-Volume Ellipsoid (MVE)?

scale-invariant distance-based outliers?

Page 52

OK, THAT WAS FUN

now let’s talk about filling out forms.

joint work ... with kuang chen, tapan parikh and others

Page 53

DATA ENTRY

repetitive, tedious, unglamorous; often contracted out to low-paid employees

often “in the way” of more valuable content

the topic of surprisingly little CS research; compare, for example, to data visualization!

http://www.flickr.com/photos/22646823@N08/3070394453/

Page 54

DATA ENTRY!

the first & best place to improve data quality: an opportunity to fix the data at the source

a rich opportunity for new data cleaning research, with applications for robust (multidimensional) outlier detection!

synthesis of DB, HCI, survey design

reform the form!

http://www.flickr.com/photos/zarajay/459002147/

Page 55

BEST PRACTICES (FROM OUTSIDE CS)

survey design literature: question wording, ordering, grouping, encoding, constraints, cross-validation

double-entry, followed by supervisor arbitration

can these inform forms? push these ideas back to the point of data entry; computational methods to improve these practices

http://www.flickr.com/photos/4860010641@N01/316921200/

Page 56

DATA COLLECTION IN LOW-RESOURCE SETTINGS

lack of resources and expertise; trend towards mobile data collection

opportunity for intelligent, dynamic forms

though well-funded orgs often have bad forms too: deterministic and unforgiving, e.g. the spurious integrity problem

time for an automated and more statistical approach, informed by human factors

Page 57

PROPOSED NEW DATA ENTRY RULES

feedback, not enforcement: interface friction inversely proportional to likelihood

a role for data-driven probabilities during data entry: annotation should be easier than subversion; friction merits explanation

a role for data visualization during data entry: gather good evidence while you can!

theme: forms need the database and vice versa

Page 58

FEEDBACK WIDGETS

a simple example; the point: these need not be exotic


Page 61

a simple example; the point: these need not be exotic; a pure application of simple robust stats!

FEEDBACK WIDGETS

Page 62

REQUIRES MULTIVARIATE MODELING

age:

favorite drink:

this is harder to manage, computationally and from the HCI angle


Page 63

REQUIRES MULTIVARIATE MODELING

age:

favorite drink:

this is harder to manage, computationally and from the HCI angle

[widget mockup: age “4” entered; the drink dropdown promotes likely values (Milk, Apple Juice) above the full alphabetical list (Absynth, Apple Juice, Arak, Brandy, ...)]

Page 64

QUESTION ORDERING!

greedy information gain: enables better form feedback

accounts for attention span

curbstoning

http://www.flickr.com/photos/25257946@N03/2686015697/

Page 65

REASKING AND REFORMULATION

need joint data model and error model: requires some ML sophistication

error model depends on UI: will require some HCI sophistication

reformulation can be automated: e.g. quantization:

1. adult/child

2. age

[the slide reproduces an excerpt of the Usher paper:]

... still conforming to ordering constraints imposed by the form designer. Form designers may also want to specify other forms of constraints on form layout, such as a partial ordering over the questions that must be respected. The greedy approach can accommodate such constraints by restricting the choice of fields at every step to match the partial order.

A. Reordering Questions during Data Entry

In electronic form settings, we can take our ordering notion a step further, and dynamically reorder questions in a form as they are entered. This approach can be appropriate for scenarios when data entry workers input one value at a time, such as on small mobile devices. We can apply the same greedy information gain criterion as in Algorithm 1, but update the calculations with the actual responses to previous questions. Assuming questions $G = \{F_1, \ldots, F_i\}$ have already been filled in with values $g = \{f_1, \ldots, f_i\}$, the next question is selected by maximizing:

$$H(F_i \mid G = g) = -\sum_{f_i} P(F_i = f_i \mid G = g) \log P(F_i = f_i \mid G = g). \quad (7)$$

Notice that this objective is the same as Equation 4, except using the actual responses entered into previous questions, rather than taking a weighted average over all possible values. Constraints specified by the form designer, such as topical grouping, can also be respected in the dynamic framework by restricting the selection of next questions at every step.
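A minimal sketch of this greedy step (not from the paper; Python, with model.dist as an assumed interface standing in for inference over the learned Bayesian network):

import math

def next_question(unanswered, answered, model):
    # Equation 7: pick the unanswered question F_i with maximal
    # conditional entropy H(F_i | G = g), given actual responses so far.
    # model.dist(i, answered) is assumed to return P(F_i | G = g)
    # as a dict mapping value -> probability.
    def entropy(i):
        return -sum(p * math.log(p)
                    for p in model.dist(i, answered).values() if p > 0)
    return max(unanswered, key=entropy)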

In general, dynamic reordering can be particularly useful in scenarios where the input of one value determines the value of another. For example, in a form with questions for gender and pregnant, a response of male for the former dictates the value and potential information gain of the latter. However, dynamic reordering presents a drawback in that it may confuse data entry workers who routinely enter information into the same form, and have come to expect a specific question order. Determining the tradeoff between these opposing concerns is a human factors issue that depends on both the application domain and the user interface employed.

VI. QUESTION RE-ASKING

After a form instance is entered, the probabilistic model is again applied for the purpose of identifying errors made during entry. Because this determination is made immediately after form submission, USHER can choose to re-ask questions for which there may be an error. By focusing the re-asking effort only on questions that were likely to be mis-entered, USHER is likely to catch mistakes at a small incremental cost to the data entry worker. Our approach is a data-driven alternative to the expensive practice of double-entry, where every question is re-asked; we focus re-asking effort only on question responses that are unlikely with respect to the other form responses.

USHER estimates a contextualized error likelihood for each question response, i.e., a probability of error that is dependent on every other field response. The intuition behind error detection is straightforward: questions whose responses are “unexpected,” with respect to the rest of the input responses, are more likely to be incorrect.

Fig. 5. A graphical model with explicit error modeling. Here, Di represents the actual input provided by the data entry worker for the ith question, and Fi is the correct unobserved value of that question that we wish to predict. The rectangular plate around the center variables denotes that those variables are repeated for each of the N form questions. The F variables are connected by edges z ∈ Z, representing the relationships discovered in the structure learning process; this is the same structure used for the question ordering component. Variable θi represents the “error” distribution, which in our current model is uniform over all possible values. Variable Ri is a hidden binary indicator variable specifying whether the entered data was erroneous; its probability λi is drawn from a Beta prior with fixed hyperparameters α and β.

To formally incorporate this notion, we extend our Bayesian network from Section IV using a more sophisticated model that ties together intended and actual question responses. Specifically, each question is augmented with additional nodes capturing a probabilistic view of entry error. Under this new representation, the ith question is represented with the following set of random variables:

• Fi: the correct value for the question, which is unknown to the system, and thus a hidden variable.
• Di: the question response provided by the data entry worker, an observed variable.
• θi: the probability distribution of values that are entered as mistakes, which is a single fixed distribution per question. We call θi the error distribution.
• Ri: a binary variable specifying whether an error was made in this question.

Additionally, we introduce a random variable λ shared across all questions, specifying how likely errors are to occur for a typical question of that form submission. Note that the relationships between field values discovered during structure learning are still part of the graph, so that error detection is based on the totality of form responses. We call the Bayesian network augmented with these additional random variables the error model.

As before, the Fi random variables are connected according to the learned structure explained in Section IV. Within ...

Page 66

USHER

learn a graphical model of all form variables; learn error model (structure learning & parameters)

optimize flexible aspects of form: greedy information gain principle for question ordering, subject to designer-provided constraints

dynamically parameterize during form filling: decorate widgets; reorder, reask/reformulate questions

Fig. 4. USHER components and data flow: (1) model a form and its data, (2) generate question ordering according to greedy information gain, (3) instantiate the form in a data entry interface, (4) during and immediately after data entry, provide dynamic re-ordering, feedback and re-confirmation according to contextualized error likelihood.

... one is based on a modified version of JavaBayes [17], an open source Java software for Bayesian inference. Because JavaBayes only supports discrete probability variables, we implemented the error prediction version of our model using Infer.NET [18], a Microsoft .NET Framework toolkit for Bayesian inference.

IV. LEARNING A MODEL FOR DATA ENTRY

The core of the USHER system is its probabilistic model of the data, represented as a Bayesian network over form questions. This network captures relationships between different question elements in a stochastic manner. In particular, given input values for some subset of the questions of a particular form instance, the model can infer probability distributions over values of that instance's remaining unanswered questions. In this section, we show how standard machine learning techniques can be used to induce this model from previous form entries.

We will use $F = \{F_1, \ldots, F_n\}$ to denote a set of random variables representing the values of n unknown questions comprising a data entry form. We assume that each question response takes on a finite set of discrete values; continuous values can be discretized by dividing the data range into intervals and assigning each interval one value.² To learn the probabilistic model, we assume access to prior entries for the same form.

USHER first builds a Bayesian network over the form questions, which will allow it to compute probability distributions over arbitrary subsets G of form question random variables, given already entered question responses G′ = g′ for that instance, i.e., $P(G \mid G' = g')$. Constructing this network requires two steps: first, the induction of the graph structure of the network, which encodes the conditional independencies between the question random variables F; and second, the estimation of the resulting network's parameters.

²Our present formulation ignores dependencies between ordinal values; modeling such relationships is an important direction of future work.

The naïve approach to structure selection would be to assume complete dependence of each question on every other question. However, this would blow up the number of free parameters in our model, leading to both poor generalization performance of our predictions, and prohibitively slow model queries. Instead, we learn the structure using the prior form submissions in the database. In our implementation, we use the BANJO software [16] for structure learning, which searches through the space of possible structures using simulated annealing, and chooses the best structure according to the Bayesian Dirichlet Equivalence criterion [19]. Figures 1 and 2 show example automatically learned structures for two data domains.³

Note that in certain domains, form designers may already have strong common sense notions of questions that should be related (e.g., education level and income). As a postprocessing step, the form designer can manually tune the resulting model to incorporate such intuitions. In fact, the entire structure could be manually constructed in domains where an expert has comprehensive prior knowledge of the questions' interdependencies.

Given a graphical structure of the questions, we can then estimate the conditional probability tables that parameterize each node in a straightforward manner, by counting the proportion of previous form submissions with those response assignments. The probability mass function for a single question Fi with m possible discrete values, conditioned on its set of parent nodes G from the Bayesian network, is:

$$P(F_i = f_i \mid \{F_j = f_j : F_j \in G\}) = \frac{N(F_i = f_i, \{F_j = f_j : F_j \in G\})}{N(\{F_j = f_j : F_j \in G\})}. \quad (1)$$

In this notation, $P(F_i = f_i \mid \{F_j = f_j : F_j \in G\})$ refers to the conditional probability of question Fi taking value fi, given that each question Fj in set G takes on value fj. Here, N(X) is the number of prior form submissions that match the conditions X; in the denominator, we count the number of times a previous submission had the subset G of its questions set according to the listed fj values; and in the numerator, we count the number of times when those previous submissions additionally had Fi set to fi.

Because the number of prior form instances may be limited, and thus may not account for all possible combinations of prior question responses, equation 1 may assign zero probability to some combinations of responses. Typically, this is undesirable; just because a particular combination of values has not occurred in the past does not mean that combination cannot occur at all. We overcome this obstacle by smoothing these parameter estimates, interpolating each with a background ...

³It is important to note that the arrows in the network do not represent causality, only that there is a probabilistic relationship between the questions.
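A sketch of this counting-plus-smoothing estimate (Equation 1 interpolated with a uniform background; the paper's actual smoothing follows Jelinek and Mercer [20], so treat the mixing weight here as illustrative):

from collections import Counter

def estimate_cpt(submissions, i, parents, m, alpha=0.1):
    # submissions: list of dicts mapping question index -> entered value.
    # m: number of possible values for question i.
    joint, cond = Counter(), Counter()
    for s in submissions:
        key = tuple(s[j] for j in parents)
        cond[key] += 1                  # denominator: N({F_j = f_j})
        joint[(s[i], key)] += 1         # numerator: N(F_i = f_i, {F_j = f_j})

    def prob(fi, parent_values):
        key = tuple(parent_values)
        empirical = joint[(fi, key)] / cond[key] if cond[key] else 0.0
        return (1 - alpha) * empirical + alpha / m   # smoothed estimate
    return prob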

Page 67

EXAMPLE WIDGETS

Fig. 8. Mockups of some simple dynamic data entry widgets illustrating various design options.

ACKNOWLEDGMENTS

The authors thank Yaw Anokwa, Michael Bernstein, S.R.K. Branavan, Tyson Condie, Heather Dolan, Max van Kleek, Neal Lesh, Alice Lin, Jamie Lockwood, Bob McCarthy, Tom Piazza, and Christine Robson for their helpful comments and suggestions.

REFERENCES

[1] T. Dasu and T. Johnson, Exploratory Data Mining and Data Cleaning. Wiley Series in Probability and Statistics, 2003.
[2] C. Batini and M. Scannapieco, Data Quality: Concepts, Methodologies and Techniques. Springer, 2006.
[3] R. M. Graves, F. J. Fowler, M. P. Couper, J. M. Lepkowski, E. Singer, and R. Tourangeau, Survey Methodology. Wiley-Interscience, 2004.
[4] J. Lee, "Address to WHO staff," http://www.who.int/dg/lee/speeches/2003/21 07/en, World Health Organization, 2003.
[5] J. V. D. Broeck, M. Mackay, N. Mpontshane, A. K. K. Luabeya, M. Chhagan, and M. L. Bennish, "Maintaining data integrity in a rural clinical trial," Controlled Clinical Trials, 2007.
[6] R. McCarthy and T. Piazza, "Personal interview," University of California at Berkeley Survey Research Center, 2009.
[7] S. Patnaik, E. Brunskill, and W. Thies, "Evaluating the accuracy of data collection on mobile phones: A study of forms, SMS, and voice," 2009.
[8] J. M. Hellerstein, "Quantitative data cleaning for large databases," United Nations Economic Commission for Europe (UNECE), 2008.
[9] K. Z. Gajos, D. S. Weld, and J. O. Wobbrock, "Decision-theoretic user interface generation," in AAAI'08. AAAI Press, 2008, pp. 1532–1536.
[10] A. Ali and C. Meek, "Predictive models of form filling," Microsoft Research, Tech. Rep. MSR-TR-2009-1, Jan. 2009.
[11] Y. Yu, J. A. Stamberger, A. Manoharan, and A. Paepcke, "Ecopod: a mobile tool for community based biodiversity collection building," in JCDL, 2006.
[12] S. Day, P. Fayers, and D. Harvey, "Double data entry: what value, what price?" Controlled Clinical Trials, 1998.
[13] D. W. King and R. Lashley, "A quantifiable alternative to double data entry," Controlled Clinical Trials, 2000.
[14] K. Kleinman, "Adaptive double data entry: a probabilistic tool for choosing which forms to reenter," Controlled Clinical Trials, 2001.
[15] K. L. Norman, "Online survey design guide," http://lap.umd.edu/survey design.
[16] A. Hartemink, "Banjo: Bayesian network inference with Java objects," http://www.cs.duke.edu/~amink/software/banjo.
[17] F. G. Cozman, "JavaBayes: Bayesian networks in Java," http://www.cs.cmu.edu/~javabayes.
[18] Microsoft Research Cambridge, "Infer.NET," http://research.microsoft.com/en-us/um/cambridge/projects/infernet.
[19] D. Heckerman, D. Geiger, and D. M. Chickering, "Learning Bayesian networks: The combination of knowledge and statistical data," Machine Learning, vol. 20, no. 3, pp. 197–243, 1995.
[20] F. Jelinek and R. L. Mercer, "Interpolated estimation of Markov source parameters from sparse data," in Proceedings of the Workshop on Pattern Recognition in Practice, 1980.
[21] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2007.
[22] J. M. Bernardo and A. F. Smith, Bayesian Theory. Wiley Series in Probability and Statistics, 2000.
[23] T. P. Minka, "Expectation propagation for approximate Bayesian inference," in Proceedings of the Conference in Uncertainty in Artificial Intelligence, 2001.
[24] U. C. Berkeley, "Survey documentation and analysis," http://sda.berkeley.edu.
[25] W. Willett, "Scented widgets: Improving navigation cues with embedded visualizations," IEEE Transactions on Visualization and Computer Graphics, 2007.

[Fig. 8 widget annotations: reduced friction, likelihood hints; post-hoc assessment; reduced friction; explicit probabilities]

Page 68

INITIAL ASSESSMENTS

Tanzanian HIV/AIDS forms, US political survey

Simulation shows significant benefits, both in reordering and reasking models

User study in the works

[Fig. 7 panels: % successful trials vs. number of questions re-asked, for the Survey and Patient datasets at error probs 0.05, 0.1, and 0.2; Usher vs. Random]

Fig. 7. Results of the re-asking simulation experiment. In each case, the x-axis measures how many questions we are allowed to re-ask, and the y-axis measures whether we correctly identify all erroneous questions within that number of re-asks. The error probability indicates the rate at which we simulate errors in the original data. Results for the survey data are shown at top, and for the HIV/AIDS data at bottom.

VIII. DISCUSSION: DYNAMIC INTERFACES FOR DATA ENTRY

In the sections above, we saw how Usher takes statistical information traditionally associated with offline data cleaning, and applies it to interactive data entry via question ordering and re-asking. This raises questions about the human-computer interactions inherent in electronic form-filling, which are typically device- and application-dependent. For example, in one of our applications, we are interested in how data quality interactions play out on mobile devices in developing countries, as in the Tanzanian patient forms we examined above. But similar questions arise in traditional online forms like web surveys. In this section we outline some broad design considerations that arise from the probabilistic power of the models and algorithms in Usher. We leave the investigation of specific interfaces and their evaluation in various contexts to future work.

While an interactive Usher-based interface is presenting questions (either one-by-one or in groups), it can infer a probability for each possible answer to the next question; those probabilities are “contextualized” (conditioned) by previous answers. The resulting quantitative probabilities can be exposed to users in different manners and at different times. We taxonomize some of these design options as follows:

1) Time of exposure: friction and assessment. The probability of an answer can be exposed in an interface before the user chooses their answer. This is often done to improve user speed by adjusting the friction of entering different answers: likely results become easy or attractive to enter, unlikely results require more work. Examples of data-driven variance in friction include type-ahead mechanisms in textfields, “popular choice” items repeated at the top of drop-down lists, and direct decoration (e.g. coloring or font-size) of each choice in accordance with its probability. A downside of this “beforehand” exposure of answer probabilities is the potential to bias answers. Alternatively, probabilities may be exposed in the interface only after the user selects an answer. This becomes a form of assessment, for example by flagging unlikely choices as potential outliers. This assessment can be seen as a “soft” probabilistic version of the constraint violation visualizations commonly found in web forms (e.g. the “red star” that often shows up next to forbidden or missing entries). Post-hoc assessment arguably has less of a biasing effect than friction. This is both because users choose initial answers without knowledge of the model's predictions, and because users may be less likely to modify previous answers than they would be to change their minds before entry.

2) Explicitness of exposure: Feedback mechanisms in adaptive interfaces vary in terms of how explicitly they intervene in the user's task. Gajos et al. proposed elective versus mandatory adaptations as an axis to consider. For instance, a combo-box that sorts its values based on likelihood is mandatory with a high level of friction; whereas a combo-box with a split-menu, mentioned above, is elective: the user can choose to ignore the popular choices and do a traditional alphabetical search through the list. Another important consideration ...

[Fig. 6 panels: % remaining fields predicted / all remaining fields predicted vs. number of inputted fields, for the Survey and Patient datasets; Dynamic Reordering vs. Static Ordering vs. Original Ordering vs. Random]

Fig. 6. Results of the ordering simulation experiment. In each case, the x-axis measures how many questions are filled before the submission is truncated. In the figures at the left, the y-axis plots the average proportion of remaining questions whose responses are predicted correctly. In the figures at the right, the y-axis plots the proportion of form instances for which all remaining questions are predicted correctly. Results for the survey data are shown at top, and for the HIV/AIDS data at bottom.

... designer's ordering, and a random ordering. In each case, predictions are made by computing the maximum position (mode) of the probability distribution over un-entered questions, given the known responses. Results are averaged over each instance in the test set.

The left-hand graphs of Figure 6 measure the average number of correctly predicted unfilled questions, as a function of how many responses the data entry worker did enter before being interrupted. In each case, the USHER orderings are able to predict question responses with greater accuracy than both the original form ordering and a random ordering for most truncation points. Similar relative performance is exhibited when we measure the percentage of test set instances where all unfilled questions are predicted correctly, as shown in the right side of Figure 6.

The original form orderings tend to underperform their USHER counterparts; human form designers typically do not optimize for asking the most difficult questions first, instead often focusing on boilerplate material at the beginning of a form. Such design methodology does not optimize for greedy information gain.

As expected, between the two USHER approaches, the dynamic ordering yields slightly greater predictive power than the static ordering. Because the dynamic approach is able to adapt the form to the data being entered, it can focus its question selection on high-uncertainty questions specific to the current form instance. In contrast, the static approach effectively averages over all possible uncertainty paths.

2) Re-asking: For the re-asking experiment, our hypothetical scenario is one where the data entry worker enters a complete form instance, but possibly with erroneous values for some question responses. Specifically, we assume that for each data value, the entry worker has some fixed chance p of making a mistake. When a mistake occurs, we assume that an erroneous value is chosen uniformly at random. Once the entire instance is entered, we feed the entered values to our error model, and compute the probability of error for each question. We then re-ask the questions with the highest error probabilities, and measure whether we chose to re-ask the questions that were actually wrong. Results are averaged over 10 random trials for each test set instance.

Figure 7 plots the percentage of instances where we choose to re-ask all of the erroneous questions, as a function of the number of questions that are re-asked, for error probabilities of 0.05, 0.1, and 0.2. In each case, our error model is able to make significantly better choices about which questions to re-ask than a random baseline. In fact, for p = 0.05, which is a representative error rate that is observed in the field [7], USHER successfully re-asks all errors over 80% of the time within the first three questions in both data sets. We observe that the traditional approach of double entry corresponds to re-asking every question; under reasonable assumptions about the occurrence of errors, our model is often able to achieve the same result as double entry (of identifying all erroneous responses) at a substantially reduced cost, in terms of number of questions asked.


Page 69

CONCLUSIONS

DB community has much to learn about quantitative data cleaning

e.g. robust statistics; and much to offer:

scalability, end-to-end view of data lifecycle

note: everything is “quantitative”; we live in an era of big data and statistics!

work across fields, build tools! DB, stats, HCI, org mgmt, ...

Page 70

ADDITIONAL READING

Exploratory Data Mining and Data Cleaning, Tamraparni Dasu and Theodore Johnson, Wiley, 2003.
Robust Regression and Outlier Detection, Peter J. Rousseeuw and Annick M. Leroy, Wiley, 1987.
"Data Streams: Algorithms and Applications", S. Muthukrishnan, Foundations and Trends in Theoretical Computer Science 1(1), 2005.
Exploratory Data Analysis, John Tukey, Addison-Wesley, 1977.
Visualizing Data, William S. Cleveland, Hobart Press, 1993.

Page 71

WITH THANKS TO...

Steven Vale, UN Economic Commission for Europe

Sara Wood, PLOS

the Usher team: Kuang Chen, Tapan Parikh, UC Berkeley; Harr Chen, MIT

Page 72

EXTRA GOODIES

Page 73

RESAMPLING: BOOTSTRAP & JACKKNIFE

computational solution to small or noisy data

sample, compute estimator, repeat

at end, average the estimators over the samples

recent work on scaling: see MAD Skills talk Thursday

needs care: any bootstrap sample could have more outliers than breakdown point

note: turns data into a sampling distribution!
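A minimal sketch of the recipe, bootstrapping the median of the ages example (1000 resamples is an arbitrary illustrative choice):

import numpy as np

ages = np.array([12, 13, 14, 21, 22, 26, 33, 35, 36, 37,
                 39, 42, 45, 47, 54, 57, 61, 68, 450])
rng = np.random.default_rng(0)

# Sample with replacement, compute the estimator, repeat; the collected
# estimates approximate the sampling distribution of the median.
meds = np.array([np.median(rng.choice(ages, size=ages.size))
                 for _ in range(1000)])
print(meds.mean(), np.percentile(meds, [2.5, 97.5]))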

Page 74

ASIDE 1: INDEXES

Rates of inflation over years: 1.03, 1.05, 1.01, 1.03, 1.06. $10 at start = $11.926 at end. want a center metric µ so that $10 \cdot \mu^5 = \$11.926$

geometric mean: $\sqrt[n]{\prod_{i=1}^{n} k_i}$

sensitive to outliers near 0; breakdown point 0%

Page 75

ASIDE 2: RATES

Average speed on a car trip: 50km @ 10kph, 50km @ 50kph; travel 100km in 6 hours; “average” speed 100km/6hr = 16.67kph

harmonic mean: $n \,\big/\, \sum_{i=1}^{n} \frac{1}{k_i}$

the reciprocal of the mean of the reciprocals of the rates; sensitive to outliers near 0; breakdown point: 0%
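Both asides reduce to one-line computations; a sketch reproducing the slides' numbers:

import numpy as np

inflation = np.array([1.03, 1.05, 1.01, 1.03, 1.06])
geo_mean = inflation.prod() ** (1 / inflation.size)
print(10 * geo_mean ** 5)            # 11.926: the compounded end value

speeds = np.array([10.0, 50.0])      # kph, over two equal 50km legs
harm_mean = speeds.size / (1 / speeds).sum()
print(harm_mean)                     # 16.67 kph: the true trip-average speed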

Page 76

ROBUSTIFYING THESE

Can always trim; winsorizing requires care: the weight of the “substitute” depends on its value. other proposals for indexes (geometric mean): 100%, or 1/2 the smallest measurable value

Useful fact about means: harmonic <= geometric <= arithmetic; can compute (robust versions of) all 3 to get a feel

Page 77

NON-NORMALITY

Not everything is normal. Multimodal distributions: cluster before looking for outliers

Power laws (Zipfian): easy to confuse with normal data + a few frequent outliers. Nice blog post by Panos Ipeirotis

Various normality tests: the dip statistic is a robust test; Q-Q plots against the normal are good for intuition

[plots: frequency spectrum; density(age), N = 34, bandwidth = 23.53]

Page 78

NON-NORMAL. NOW WHAT?

assume normality anyhow: consider likely false positives, negatives

model data, look for outliers in residuals: often normally distributed if sources of noise are i.i.d.

partition data, look in subsets. manual: data cubes, Johnson/Dasu's data spheres; automatic: clustering

non-parametric outlier detection methods: a few slides from now...

[plots: frequency spectrum; density(age), N = 34, bandwidth = 23.53]