07: Statistical Approaches to Randomization


Random Graphs & Graph Randomization Procedures

Measuring Networks: Connectivity, Reachability x Volume, Phase Transitions

Random Graphs & Graph Randomization
1) Intro: Purpose?
2) Basic Random Graphs
   1) Erdős random graphs
   2) Degree constrained
   3) General constraints: set of all graphs that…
   4) Algorithmic approaches
3) Random Graph Applications
   1) Connectivity
   2) Small Worlds
   3) Triad Distributions
   4) Simulations
4) Measurement Uncertainty
   1) Bootstrap SEs
5) Permutation Models
   1) QAP
   2) Peer Influence Models
6) Latent Space Models

Introduction to Random & Stochastic: Why random graphs?

Inference:
• Network inference differs from many of the inference problems we are used to.
• We have the population (by assumption).
• We want to know what the process underlying network formation might be; random graphs thus create one (reasonable?) comparison group.
• Common association tests (correlations, regressions, etc.) assume case independence; randomization provides a non-parametric way to evaluate statistical significance.
• Sampling: there are few well-established ways to partially sample a network; random graph tools are making that possible.

Introduction to Random & Stochastic: Why random graphs?

Simulation: we often want to test measures, models, or methods on a large collection of networks with known properties.
• Purely random graphs have very well-known mathematical properties.
• By adding random information to networks with known properties, we can bridge data-collection gaps.
• Models are now at the point that we can often infer global network structure from network samples.

Introduction to Random & Stochastic: Simple Random Graphs

Erdős–Rényi graphs

Simplest random graph: given a graph of n nodes, assume all edges have equal probability of being present.

Or

A graph chosen at random from the set of all graphs with N nodes and M edges.

Number of unique undirected graph patterns by number of nodes

Enumeration is impossible, so we use construction rules that ensure equal probability for all graphs in the space.

In an Erdős random graph, each dyad has the same probability of being tied, so the algorithm is a simple coin-flip on each dyad.*

Degree will be Poisson distributed, and the nodes with high degree are likely to be at the intuitive center.

* Note a subtle difference here: the G(N,p) model will have random variability in the number of edges due to chance, which is ignorable in the limit of large networks.
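The coin-flip construction takes only a few lines. This is a minimal sketch of my own (not code from the slides; function names are arbitrary):

```python
import random
from itertools import combinations

def bernoulli_graph(n, p, seed=None):
    """G(n, p): flip a biased coin for every dyad (unordered pair)."""
    rng = random.Random(seed)
    return [(i, j) for i, j in combinations(range(n), 2) if rng.random() < p]

n, p = 1000, 0.0024  # the slide's example: mean degree ~ p*(n-1) ~ 2.4
edges = bernoulli_graph(n, p, seed=42)

# Degrees come out approximately Poisson with mean ~ p*(n-1).
deg = [0] * n
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
mean_deg = sum(deg) / n
```

With these parameters the realized mean degree fluctuates around 2.4, illustrating the footnote: G(N,p) fixes the tie probability, not the edge count.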

Introduction to Random & Stochastic: Simple Random Graphs

Simple Bernoulli graph with 1,000 nodes and average degree 2.4 (p = 0.0024).

Introduction to Random & Stochastic: Simple Random Graphs

Network connectivity changes rapidly as a function of network volume.

In a purely random network, when the average degree is < 1, the network is essentially always disconnected; when it is > 2, there is a "giant component" that takes up most of the network.

Note that this depends on mean degree rather than size, so it applies to networks of any size.

(Figure: connectivity as a function of average degree.)
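The phase transition is easy to see by simulation. A sketch of my own (not from the slides): generate Bernoulli graphs below and above the critical mean degree and measure the largest component with a union-find:

```python
import random
from itertools import combinations

def largest_component_fraction(n, mean_deg, seed):
    """Generate G(n, p) with p = mean_deg/(n-1); return the share of
    nodes in the largest connected component (union-find)."""
    rng = random.Random(seed)
    p = mean_deg / (n - 1)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in combinations(range(n), 2):
        if rng.random() < p:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj

    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

below = largest_component_fraction(2000, 0.5, seed=1)  # subcritical: tiny fragments
above = largest_component_fraction(2000, 3.0, seed=1)  # supercritical: giant component
```

Below mean degree 1 the largest component holds only a sliver of the nodes; above 2 it absorbs the large majority.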

Introduction to Random & Stochastic: Simple Random Graphs

Because random graphs are so well understood, we know exactly what the expected values are for many features, so we can compare randomly generated graphs to those expectations.


Introduction to Random & Stochastic: Less Simple Random Graphs…

Simple random graphs are a very poor model for real life, so they are not really a fair null. If you know the mixing by category in a network, you can use it to generate a random network with the correct tie probability for each mixing category:

mixprob    wht     blk     oth
wht       .0096   .0016   .0065
blk       .0013   .0085   .0045
oth       .0054   .0045   .0067

…so generate a random graph with similar mixing probabilities.

(Figure: the observed network.)
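The category-mixing generator can be sketched as below (my own illustration; the group sizes are hypothetical, and the probabilities are the table above):

```python
import random
from itertools import combinations

# Mixing-probability table from the slide, keyed by (row, column) category.
MIX = {
    ("wht", "wht"): .0096, ("wht", "blk"): .0016, ("wht", "oth"): .0065,
    ("blk", "wht"): .0013, ("blk", "blk"): .0085, ("blk", "oth"): .0045,
    ("oth", "wht"): .0054, ("oth", "blk"): .0045, ("oth", "oth"): .0067,
}

def mixing_random_graph(groups, seed=None):
    """Tie each dyad with the probability given by the members' categories.
    For an undirected graph, average the two off-diagonal entries."""
    rng = random.Random(seed)
    edges = []
    for i, j in combinations(range(len(groups)), 2):
        p = (MIX[(groups[i], groups[j])] + MIX[(groups[j], groups[i])]) / 2
        if rng.random() < p:
            edges.append((i, j))
    return edges

# Hypothetical composition: 300 wht, 300 blk, 400 oth.
groups = ["wht"] * 300 + ["blk"] * 300 + ["oth"] * 400
edges = mixing_random_graph(groups, seed=11)
```

The result has the right mixing by category but, as the next figures show, the degree distribution is still Poisson-like rather than realistic.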

(Figure: a random graph generated with the same mixing probabilities.)

(Figure: observed vs. random side by side; the degree distributions don't match.)

Introduction to Random & Stochastic: Less Simple Random Graphs…

We can condition on more features: degree distribution, dyad distribution, mixing… These can take us a long way toward a reasonable null.

Some are easy:
- fixing just the in-degree OR the out-degree: random selection on rows/columns
- fixing both in- and out-degree: a "zipper" method (generate a set of half-edges for each node's degree, randomly sort, put back together)

Edge-matching random permutation: we can easily generate networks with appropriate degree distributions by generating "edge stems" and sorting:

(Figure: nodes a–f each contribute "half-edge" stems equal to their degree, di = 1, 2, 3, …; the stems are randomly sorted and re-paired into edges. Need to ensure you end up with a valid edge list!)
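The "zipper" step translates directly into code. A sketch of my own (names are arbitrary), which deliberately keeps the caveat visible: a raw stem-shuffle can produce self-loops or repeated edges, which a valid simple-graph sampler must reject and retry:

```python
import random

def configuration_graph(degree_seq, seed=None):
    """'Zipper' method: make deg(v) half-edge stems per node, shuffle,
    and pair consecutive stems into edges. May yield self-loops or
    multi-edges; checking for a valid edge list is left to the caller."""
    rng = random.Random(seed)
    stems = [v for v, d in enumerate(degree_seq) for _ in range(d)]
    rng.shuffle(stems)
    return list(zip(stems[::2], stems[1::2]))

deg_seq = [2, 2, 1, 3, 2]  # sum must be even
edges = configuration_graph(deg_seq, seed=7)

# Degrees recomputed from the edge list always match the target sequence.
out = [0] * len(deg_seq)
for i, j in edges:
    out[i] += 1
    out[j] += 1
```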

Introduction to Random & Stochastic: Less Simple Random Graphs…

PAJEK gives you the unconditional expected values:

------------------------------------------------------------------------------
Triadic Census  2. i:\people\jwm\s884\homework\prison.net (67)
------------------------------------------------------------------------------
  Type    Number of triads (ni)   Expected (ei)   (ni-ei)/ei
------------------------------------------------------------------------------
  1 - 003        39221              37227.47          0.05
  2 - 012         5860               9587.83         -0.39
  3 - 102         2336                205.78         10.35
  4 - 021D          61                205.78         -0.70
  5 - 021U          80                205.78         -0.61
  6 - 021C         103                411.55         -0.75
  7 - 111D         105                 17.67          4.94
  8 - 111U          69                 17.67          2.91
  9 - 030T          13                 17.67         -0.26
 10 - 030C           1                  5.89         -0.83
 11 - 201           12                  0.38         30.65
 12 - 120D          15                  0.38         38.56
 13 - 120U           7                  0.38         17.46
 14 - 120C           5                  0.76          5.59
 15 - 210           12                  0.03        367.67
 16 - 300            5                  0.00      21471.04
------------------------------------------------------------------------------
Chi-Square: 137414.3919***
6 cells (37.50%) have expected frequencies less than 5.
The minimum expected cell frequency is 0.00.

Introduction to Random & Stochastic: Less Simple Random Graphs…

We can calculate the (X|MAN) distributions:

Triad Census      T    TPCNT     PU       EVT     VARTU    STDDIF

003           39221   0.8187  0.8194    39251    427.69   -1.472
012            5860   0.1223  0.1213   5810.8    1053.5    1.5156
102            2336   0.0488  0.0476   2278.7    321.01    3.1954
021D             61   0.0013  0.0015   70.949     67.37   -1.212
021U             80   0.0017  0.0015   70.949     67.37    1.1027
021C            103   0.0022  0.003     141.9    127.58   -3.444
111D            105   0.0022  0.0023   112.39    103.57   -0.727
111U             69   0.0014  0.0023   112.39    103.57   -4.264
030T             13   0.0003  0.0001   3.4292    3.3956    5.1939
030C              1   209E-7  239E-7   1.1431    1.1393   -0.134
201              12   0.0003  0.0009   42.974    38.123   -5.017
120D             15   0.0003  286E-7   1.3717     1.368   11.652
120U              7   0.0001  286E-7   1.3717     1.368    4.8122
120C              5   0.0001  573E-7   2.7433    2.7285    1.3662
210              12   0.0003  442E-7   2.1186    2.1023    6.8151
300               5   0.0001  549E-8   0.2631    0.2621    9.2522

Introduction to Random & Stochastic: Less Simple Random Graphs…

Network Sub-Structure: Triads

(Figure: the 16 triad types, arranged by number of ties:)
0 ties: 003
1 tie:  012
2 ties: 102, 021D, 021U, 021C
3 ties: 111D, 111U, 030T, 030C
4 ties: 201, 120D, 120U, 120C
5 ties: 210
6 ties: 300
(Legend: intransitive, transitive, and mixed triads.)

Introduction to Random & Stochastic: Applications

An Example of the triad census

Type        Number of triads
---------------------------------------
 1 - 003           21
---------------------------------------
 2 - 012           26
 3 - 102           11
 4 - 021D           1
 5 - 021U           5
 6 - 021C           3
 7 - 111D           2
 8 - 111U           5
 9 - 030T           3
10 - 030C           1
11 - 201            1
12 - 120D           1
13 - 120U           1
14 - 120C           1
15 - 210            1
16 - 300            1
---------------------------------------
Sum (2 - 16):      63

Introduction to Random & Stochastic: Applications

(Figure: triad census distributions, standardized difference from expected; data from Add Health. y-axis: t-value, roughly -100 to 400; x-axis: triad types 012, 102, 021D, 021U, 021C, 111D, 111U, 030T, 030C, 201, 120D, 120U, 120C, 210, 300.)

Introduction to Random & Stochastic: Applications

As with undirected graphs, you can use the types of triads allowed to characterize the total graph. But now the potential patterns are much more diverse.

1) All triads are 030T: a perfect linear hierarchy.

Introduction to Random & Stochastic: Applications

Cluster structure allows the triads {003, 300, 102}:

(Figure: block-model image of a clustered structure, with M blocks on the diagonal and N* blocks off the diagonal.)

Eugene Johnsen (1985, 1986) specifies a number of structures that result from various triad configurations.

Introduction to Random & Stochastic: Applications

Ranked Cluster, PRC = {300, 102, 003, 120D, 120U, 030T, 021D, 021U}:

(Figure: block-model image of a ranked-cluster structure, with M and A* blocks and a 0/1 pattern matrix.)

And many more…

Introduction to Random & Stochastic: Applications

Substantively, specifying a set of triads defines a behavioral mechanism, and we can use the distribution of triads in a network to test whether the hypothesized mechanism is active.

We do this by (1) counting the number of each triad type in a given network and (2) comparing it to the expected number, given some random distribution of ties in the network.

See Wasserman and Faust, Chapter 14, for computational details (and I have code that will generate these distributions, if you want it).
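The counting step can be implemented directly. Below is a self-contained sketch of my own (not the Wasserman and Faust code): it builds the 16 triad classes from canonical examples, classifies every node triple by an isomorphism-invariant signature, and tallies the census. The 4-node example is a transitive tournament, i.e., the perfect linear hierarchy mentioned above, so every triad is 030T.

```python
from itertools import combinations

# Canonical arc sets for the 16 triad types on nodes 0, 1, 2, following
# the usual Holland-Leinhardt D(own)/U(p)/C(yclic)/T(ransitive) mnemonic.
TRIAD_TYPES = {
    "003":  [],
    "012":  [(0, 1)],
    "102":  [(0, 1), (1, 0)],
    "021D": [(0, 1), (0, 2)],                  # out-star
    "021U": [(1, 0), (2, 0)],                  # in-star
    "021C": [(0, 1), (1, 2)],                  # chain
    "111D": [(0, 1), (1, 0), (2, 0)],          # arc into the mutual dyad
    "111U": [(0, 1), (1, 0), (0, 2)],          # arc out of the mutual dyad
    "030T": [(0, 1), (0, 2), (1, 2)],          # transitive
    "030C": [(0, 1), (1, 2), (2, 0)],          # cycle
    "201":  [(0, 1), (1, 0), (0, 2), (2, 0)],
    "120D": [(0, 1), (1, 0), (2, 0), (2, 1)],
    "120U": [(0, 1), (1, 0), (0, 2), (1, 2)],
    "120C": [(0, 1), (1, 0), (1, 2), (2, 0)],
    "210":  [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2)],
    "300":  [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1)],
}

def signature(arcs):
    """Isomorphism-invariant key: (mutual, asymmetric, null dyad counts,
    sorted out-degrees, sorted in-degrees) distinguishes all 16 types."""
    arcs = set(arcs)
    nodes = (0, 1, 2)
    m = a = n = 0
    for i, j in combinations(nodes, 2):
        ij, ji = (i, j) in arcs, (j, i) in arcs
        if ij and ji:
            m += 1
        elif ij or ji:
            a += 1
        else:
            n += 1
    out = tuple(sorted(sum((i, j) in arcs for j in nodes) for i in nodes))
    inn = tuple(sorted(sum((j, i) in arcs for j in nodes) for i in nodes))
    return (m, a, n, out, inn)

SIG_TO_NAME = {signature(arcs): name for name, arcs in TRIAD_TYPES.items()}

def triad_census(nodes, arcs):
    """Count each triad type over all node triples of a directed graph."""
    arcs = set(arcs)
    census = dict.fromkeys(TRIAD_TYPES, 0)
    for trip in combinations(sorted(nodes), 3):
        relabel = {v: k for k, v in enumerate(trip)}
        sub = [(relabel[x], relabel[y]) for x, y in arcs
               if x in relabel and y in relabel]
        census[SIG_TO_NAME[signature(sub)]] += 1
    return census

# A 4-node transitive tournament: a perfect linear hierarchy.
census = triad_census(range(4), [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
```

Comparing such counts to their expectation under a conditional random distribution is exactly the test described above; libraries such as NetworkX ship an equivalent `triadic_census`.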

Introduction to Random & Stochastic: Applications

(Figure: table showing which of the 16 triad types, 003 through 300, are permitted under each micro-model below.)

Triad Micro-Models:
BA: Balance (Cartwright and Harary, '56)
CL: Clustering Model (Davis, '67)
RC: Ranked Cluster (Davis & Leinhardt, '72)
R2C: Ranked 2-Clusters (Johnsen, '85)
TR: Transitivity (Davis and Leinhardt, '71)
HC: Hierarchical Cliques (Johnsen, '85)
39+: Model that fits D&L's 742 matrices, N: 39-72
p1-p4: Johnsen, 1986, Process Agreement Models

Measuring Networks: Triads

Structural indices based on the distribution of triads

The observed distribution of triads can be fit to the hypothesized structures using weighting vectors for each type of triad:

tau(l) = ( l'T - l'mu_T ) / sqrt( l' Sigma_T l )

Where:
l = a 16-element weighting vector for the triad types
T = the observed triad census
mu_T = the expected value of T
Sigma_T = the variance-covariance matrix of T

Introduction to Random & Stochastic: Applications

For the Add Health data, the observed distribution of the tau statistic for various models was:

Indicating that a ranked-cluster model fits the best.

Introduction to Random & Stochastic: Applications

(Figure: Prosper data.)

Introduction to Random & Stochastic: Applications

Travers and Milgram's work on the small world is responsible for the standard belief that "everyone is connected by a chain of about 6 steps." Two questions:

Given what we know about networks, what is the longest path (defined by handshakes) that separates any two people?

Is 6 steps a long distance or a short distance?

Introduction to Random & Stochastic: Applications

If nobody’s contacts overlapped, we’d reach everyone very quickly. Six would be a large number.

If ties overlap at random…we’d reach each other almost as quickly. Six would still be a large number.

Is 6 steps a long distance or a short distance?

Introduction to Random & Stochastic: Applications

(Figure: random reachability, by number of close friends. y-axis: percent contacted, 0-100%; x-axis: remove (steps), 0-15; curves for degree = 2, 3, and 4.)

Introduction to Random & Stochastic: Applications

(Figure: proportion reached, 0-1, by remove (steps) 0-14, for "Pine Brook Jr. High": random graph vs. observed.)

Introduction to Random & Stochastic: Applications

Milgram's test: send a packet from sets of randomly selected people to a stockbroker in Boston.

(Figure; legend: random Boston, random Nebraska, Boston stockbrokers.)

Introduction to Random & Stochastic: Applications

Most chains found their way through a small number of intermediaries.

Understanding why this is true has been called the “Small-World Problem,” which has since been generalized to a much more formal understanding of tie patterns in large networks.

For purposes of flow through graphs, distance is a primary concern so long as transmission is uncertain.

Introduction to Random & Stochastic: Applications

Based on Milgram’s (1967) famous work, the substantive point is that networks are structured such that even when most of our connections are local, any pair of people can be connected by a fairly small number of relational steps.

Introduction to Random & Stochastic: Applications

Watts says there are 4 conditions that make the small world phenomenon interesting:
1) The network is large: O(billions)
2) The network is sparse: people are connected to a small fraction of the total network
3) The network is decentralized: no single star (or small number of stars)
4) The network is highly clustered: most friendship circles overlap

Introduction to Random & Stochastic: Applications

Formally, we can characterize a graph through 2 statistics.

1) The characteristic path length, L: the average length of the shortest paths connecting any two actors. (Note this only works for connected graphs.)

2) The clustering coefficient, C:
• Version 1: the average local density. That is, Cv = ego-network density, and C = Σ Cv / n.
• Version 2: the transitivity ratio: the number of closed triads divided by the number of closed and open triads.

A small world graph is any graph with a relatively small L and a relatively large C.
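Both statistics are simple to compute. A minimal sketch of my own (version 1 of C; assumes an undirected, connected graph given as a neighbor dict):

```python
from collections import deque
from itertools import combinations

def small_world_stats(adj):
    """adj: dict node -> set of neighbors (undirected, connected graph).
    Returns (L, C): characteristic path length and average local
    clustering (the 'average ego-network density' version)."""
    nodes = list(adj)

    # L: average shortest-path length over all pairs, via BFS from each node.
    total, pairs = 0, 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    L = total / pairs

    # C: for each ego, the density of ties among its neighbors.
    cvs = []
    for v in nodes:
        nbrs = adj[v]
        k = len(nbrs)
        if k < 2:
            cvs.append(0.0)  # convention: degree-0/1 nodes contribute 0
            continue
        ties = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        cvs.append(ties / (k * (k - 1) / 2))
    C = sum(cvs) / len(nodes)
    return L, C

# Sanity checks: complete graph K4 (L = 1, C = 1) and a 3-node path (C = 0).
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
L_k4, C_k4 = small_world_stats(k4)
L_path, C_path = small_world_stats({0: {1}, 1: {0, 2}, 2: {1}})
```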

Introduction to Random & Stochastic: Applications

The most clustered graph is Watts's "caveman" graph. Compared to random graphs, its C is large and its L is long. The intuition, then, is that clustered graphs tend to have (relatively) long characteristic path lengths.

The small world phenomenon rests on the opposite: high clustering and short path distances. How?

Introduction to Random & Stochastic: Applications

C large and L small = small-world graphs.

Simulate networks with a parameter (a) that governs the proportion of ties that are clustered versus the proportion that are randomly distributed across the network.

Introduction to Random & Stochastic: Applications

Why does this work? The key is the fraction of shortcuts in the network.

In a highly clustered, ordered network, a single random connection creates a shortcut that lowers L dramatically.

Watts demonstrates that small world graphs occur in graphs with a small number of shortcuts.

Introduction to Random & Stochastic: Applications

How do we know if an observed graph fits the SW model?

Random expectations: for basic one-mode networks (such as acquaintance nets), we can get approximate random-graph values for L and C as:

L_random ~ ln(n) / ln(k)
C_random ~ k / n

as k and n get large.

Note that C essentially approaches zero as n increases (with k assumed fixed). This formula uses the density-based measure of C, but the substantive implications are similar for the triad formula.
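Plugging in the earlier example (n = 1000, k = 2.4) makes the comparison concrete. The rule-of-thumb thresholds below are my own arbitrary illustration, not part of the slides:

```python
import math

def sw_random_expectations(n, k):
    """Approximate random-graph values: L ~ ln(n)/ln(k), C ~ k/n."""
    return math.log(n) / math.log(k), k / n

def looks_small_world(L_obs, C_obs, n, k, c_ratio=10.0, l_slack=2.0):
    """Rule-of-thumb check (thresholds are illustrative, not canonical):
    C far above the random expectation, L not much above it."""
    L_rand, C_rand = sw_random_expectations(n, k)
    return C_obs > c_ratio * C_rand and L_obs < l_slack * L_rand

L_rand, C_rand = sw_random_expectations(1000, 2.4)  # ~7.89 and 0.0024
```

A graph of this size with, say, C around 0.3 and L near 8 would clear the small-world bar easily, since the random expectation for C is only 0.0024.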

Introduction to Random & Stochastic: Applications

Reversing the random graph problem: given average tie volume and population size, what is the expected size of a subpopulation?

http://www.soc.duke.edu/~jmoody77/Hydra/scaleupcalc.htm

Introduction to Random & Stochastic: Applications

Comparing multiple networks: QAP

The substantive question is how one set of relations (or dyadic attributes) relates to another. For example:

• Do marriage ties correlate with business ties in the Medici family network?
• Are friendship relations correlated with joint membership in a club?

Introduction to Random & Stochastic: Using randomizations to avoid parametric assumptions

Assessing the correlation is straightforward: we simply correlate each corresponding cell of the two matrices:

Marriage
 1 ACCIAIUOL  0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
 2 ALBIZZI    0 0 0 0 0 1 1 0 1 0 0 0 0 0 0 0
 3 BARBADORI  0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0
 4 BISCHERI   0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0
 5 CASTELLAN  0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0
 6 GINORI     0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 7 GUADAGNI   0 1 0 1 0 0 0 1 0 0 0 0 0 0 0 1
 8 LAMBERTES  0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
 9 MEDICI     1 1 1 0 0 0 0 0 0 0 0 0 1 1 0 1
10 PAZZI      0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
11 PERUZZI    0 0 0 1 1 0 0 0 0 0 0 0 0 0 1 0
12 PUCCI      0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
13 RIDOLFI    0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 1
14 SALVIATI   0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0
15 STROZZI    0 0 0 1 1 0 0 0 0 0 1 0 1 0 0 0
16 TORNABUON  0 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0

Business
 1  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 2  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 3  0 0 0 0 1 1 0 0 1 0 1 0 0 0 0 0
 4  0 0 0 0 0 0 1 1 0 0 1 0 0 0 0 0
 5  0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 0
 6  0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0
 7  0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0
 8  0 0 0 1 1 0 1 0 0 0 1 0 0 0 0 0
 9  0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 1
10  0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
11  0 0 1 1 1 0 0 1 0 0 0 0 0 0 0 0
12  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
13  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
14  0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
15  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
16  0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0

Dyads (i  j  marriage  business):
 1  2  0 0
 1  3  0 0
 1  4  0 0
 1  5  0 0
 1  6  0 0
 1  7  0 0
 1  8  0 0
 1  9  1 0
 1 10  0 0
 1 11  0 0
 1 12  0 0
 1 13  0 0
 1 14  0 0
 1 15  0 0
 1 16  0 0
 2  1  0 0
 2  3  0 0
 2  4  0 0
 2  5  0 0
 2  6  1 0
 2  7  1 0
 2  8  0 0
 2  9  1 0
 2 10  0 0
 2 11  0 0
 2 12  0 0
 2 13  0 0
 2 14  0 0
 2 15  0 0
 2 16  0 0

Correlation:

  1.0000000   0.3718679
  0.3718679   1.0000000

Introduction to Random & Stochastic: Using randomizations to avoid parametric assumptions

But is the observed value statistically significant?

Can’t use standard inference, since the assumptions are violated. Instead, we use a permutation approach.

Essentially, we are asking whether the observed correlation is large (small) compared to that which we would get if the assignment of variables to nodes were random, but the interdependencies within variables were maintained.

Do this by randomly sorting the rows and columns of the matrix, then re-estimating the correlation.

Introduction to Random & Stochastic: Using randomizations to avoid parametric assumptions

Comparing multiple networks: QAP

When you permute, you have to permute both the rows and the columns simultaneously to maintain the interdependencies in the data:

ID   ORIG
A    0 1 2 3 4
B    0 0 1 2 3
C    0 0 0 1 2
D    0 0 0 0 1
E    0 0 0 0 0

     Sorted
A    0 3 1 2 4
D    0 0 0 0 1
B    0 2 0 1 3
C    0 1 0 0 2
E    0 0 0 0 0

Introduction to Random & Stochastic: Using randomizations to avoid parametric assumptions

Procedure:
1. Calculate the observed correlation.
2. For K iterations:
   a) randomly sort one of the matrices (rows and columns together)
   b) recalculate the correlation
   c) store the outcome
3. Compare the observed correlation to the distribution of correlations created by the random permutations.
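The procedure translates almost line-for-line into code. A stdlib sketch of my own (UCINET, and the sna package in R, provide production versions); it correlates off-diagonal cells and permutes rows and columns with the same node relabeling:

```python
import random

def offdiag(m):
    """Flatten the off-diagonal cells of a square matrix."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(n) if i != j]

def corr(x, y):
    """Pearson correlation of two equal-length, non-constant vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def qap_test(m1, m2, iters=1000, seed=None):
    """Permute rows AND columns of m2 with the same node relabeling,
    recompute the correlation each time, and return the observed r plus
    the share of permutations with |r_perm| >= |r_obs|."""
    rng = random.Random(seed)
    n = len(m1)
    obs = corr(offdiag(m1), offdiag(m2))
    count = 0
    for _ in range(iters):
        p = list(range(n))
        rng.shuffle(p)
        perm = [[m2[p[i]][p[j]] for j in range(n)] for i in range(n)]
        if abs(corr(offdiag(m1), offdiag(perm))) >= abs(obs):
            count += 1
    return obs, count / iters

# Toy check: a matrix against itself should give r = 1 and a tiny p-value.
m1 = [
    [0, 1, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 1, 0, 0, 0],
]
obs, pval = qap_test(m1, m1, iters=500, seed=3)
```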

Introduction to Random & Stochastic: Using randomizations to avoid parametric assumptions

QAP MATRIX CORRELATION
--------------------------------------------------------------------------------

Observed matrix:  PadgBUS
Structure matrix: PadgMAR
# of Permutations: 2500
Random seed: 356

Univariate statistics

                    1         2
                PadgBUS   PadgMAR
                -------   -------
  1 Mean          0.125     0.167
  2 Std Dev       0.331     0.373
  3 Sum          30.000    40.000
  4 Variance      0.109     0.139
  5 SSQ          30.000    40.000
  6 MCSSQ        26.250    33.333
  7 Euc Norm      5.477     6.325
  8 Minimum       0.000     0.000
  9 Maximum       1.000     1.000
 10 N of Obs    240.000   240.000

Hubert's gamma: 16.000

Bivariate Statistics

                               1        2        3        4        5        6        7
                           Value   Signif      Avg       SD P(Large) P(Small)    NPerm
                        -------- -------- -------- -------- -------- -------- --------
  1 Pearson Correlation:   0.372    0.000    0.001    0.092    0.000    1.000 2500.000
  2 Simple Matching:       0.842    0.000    0.750    0.027    0.000    1.000 2500.000
  3 Jaccard Coefficient:   0.296    0.000    0.079    0.046    0.000    1.000 2500.000
  4 Goodman-Kruskal Gamma: 0.797    0.000   -0.064    0.382    0.000    1.000 2500.000
  5 Hamming Distance:     38.000    0.000   59.908    5.581    1.000    0.000 2500.000

This can be done simply in UCINET, and also in R.

Using the same logic, we can estimate alternative models, such as regressions, logits, probits, etc. The only complication is that you need to permute all of the independent matrices in the same way at each iteration.

Introduction to Random & Stochastic: Using randomizations to avoid parametric assumptions

NODE    ADJMAT              SAMERCE             SAMESEX
 1      0 1 1 1 0 0 0 0 0   0 1 0 0 1 0 0 0 1   0 0 1 1 0 0 1 1 0
 2      1 0 1 0 0 0 1 0 0   1 0 0 0 1 0 0 0 1   0 0 0 0 1 1 0 0 1
 3      1 1 0 0 1 0 1 0 0   0 0 0 1 0 1 1 1 0   1 0 0 1 0 0 1 1 0
 4      1 0 0 0 1 0 0 0 0   0 0 1 0 0 1 1 1 0   1 0 1 0 0 0 1 1 0
 5      0 0 1 1 0 1 0 1 0   1 1 0 0 0 0 0 0 1   0 1 0 0 0 1 0 0 1
 6      0 0 0 0 1 0 0 1 1   0 0 1 1 0 0 1 1 0   0 1 0 0 1 0 0 0 1
 7      0 1 1 0 0 0 0 0 0   0 0 1 1 0 1 0 1 0   1 0 1 1 0 0 0 1 0
 8      0 0 0 0 1 1 0 0 1   0 0 1 1 0 1 1 0 0   1 0 1 1 0 0 1 0 0
 9      0 0 0 0 0 1 0 1 0   1 1 0 0 1 0 0 0 0   0 1 0 0 1 1 0 0 0

Introduction to Random & Stochastic: Using randomizations to avoid parametric assumptions

Distance (Dij = abs(Yi - Yj)):
.000 .277 .228 .181 .278 .298 .095 .307 .481
.277 .000 .049 .096 .555 .575 .182 .584 .758
.228 .049 .000 .047 .506 .526 .134 .535 .710
.181 .096 .047 .000 .459 .479 .087 .488 .663
.278 .555 .506 .459 .000 .020 .372 .029 .204
.298 .575 .526 .479 .020 .000 .392 .009 .184
.095 .182 .134 .087 .372 .392 .000 .401 .576
.307 .584 .535 .488 .029 .009 .401 .000 .175
.481 .758 .710 .663 .204 .184 .576 .175 .000

Y: 0.32 0.59 0.54 0.50 0.04 0.02 0.41 0.01 -0.17

Introduction to Random & Stochastic: Using randomizations to avoid parametric assumptions

# of permutations: 2000
Diagonal valid? NO
Random seed: 995
Dependent variable: EX_SIM
Expected values: C:\moody\Classes\soc884\examples\UCINET\mrqap-predicted
Independent variables: EX_SSEX EX_SRCE EX_ADJ

Number of valid observations among the X variables = 72
N = 72

Number of permutations performed: 1999

MODEL FIT
R-square   Adj R-Sqr   Probability   # of Obs
--------   ---------   -----------   --------
   0.289       0.269         0.059         72

REGRESSION COEFFICIENTS

              Un-stdized    Stdized                    Proportion   Proportion
Independent   Coefficient   Coefficient  Significance    As Large     As Small
-----------   -----------   -----------  ------------  ----------   ----------
Intercept        0.460139      0.000000         0.034       0.034        0.966
EX_SSEX         -0.073787     -0.170620         0.140       0.860        0.140
EX_SRCE         -0.020472     -0.047338         0.272       0.728        0.272
EX_ADJ          -0.239896     -0.536211         0.012       0.988        0.012

Peer-influence results on a similarity dyad model, using QAP.


Introduction to Random & Stochastic: Latent Space Models

Z = a dimension in some unknown space that, once accounted for, makes ties independent. Z is effectively chosen with respect to some latent cluster-space, G. These "groups" define different social sources for association.


Introduction to Random & Stochastic: Latent Space Models

Prosper data, with three groups

Introduction to Random & Stochastic: Latent Space Models

Prosper data, with three groups (posterior density plots)
