Optimal Meshing

Post on 11-May-2015


DESCRIPTION

The word optimal is used in different ways in mesh generation. It could mean that the output is, in some sense, "the best mesh," or that the algorithm is, by some measure, "the best algorithm." One might hope that the best algorithm also produces the best mesh, but some tradeoffs may be necessary. In this talk, I will survey several different notions of optimality in mesh generation and explore the tradeoffs between them. The bias will be towards Delaunay/Voronoi methods.

TRANSCRIPT

Optimal Meshing

Don Sheehy
INRIA Saclay, France

soon: University of Connecticut


Mesh Generation

bias: Delaunay/Voronoi Refinement
why: We want theoretical guarantees.

Everything will be in d dimensions, where d is a constant.

Constants that depend only on d will be hidden by (really) big-O’s.


Optimality

Quality. Goal: Maximize element quality (many choices for what this means).

Mesh Size. Goal: Minimize the number of vertices/simplices. Also: graded according to a density/sizing function.

Running Time. Goal: O(n log n + m) time, for n input points and output size m.

The emphasis will be on asymptotic bounds and minimum requirements so as to produce the most general lower bounds.


Quality

Many different/competing notions of quality.

We will focus on those that yield theoretical guarantees.

This talk: Voronoi Aspect Ratio

Issues: slivers, geometric stability, post-processing/smoothing

[Figure: a Voronoi cell of a vertex v, with inradius r_v and outradius R_v; the aspect ratio of the cell is R_v / r_v.]
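To make the quality measure concrete, here is a minimal 2D sketch (my own illustration, not from the talk; `aspect_ratio` is a hypothetical helper): for a convex Voronoi cell containing its site v, the outradius R_v is the distance from v to the farthest cell vertex, and the inradius r_v is the distance from v to the nearest edge.

```python
import math

def aspect_ratio(site, cell):
    """Aspect ratio R_v / r_v of a convex cell (vertex list) around its site."""
    sx, sy = site
    # Outradius R_v: farthest vertex of the cell from the site.
    R = max(math.hypot(x - sx, y - sy) for x, y in cell)
    # Inradius r_v: distance from the site to the nearest edge line.
    r = min(
        abs((bx - ax) * (sy - ay) - (by - ay) * (sx - ax))
        / math.hypot(bx - ax, by - ay)
        for (ax, ay), (bx, by) in zip(cell, cell[1:] + cell[:1])
    )
    return R / r

# A square cell centered on its site: R_v = sqrt(2), r_v = 1.
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
print(aspect_ratio((0, 0), square))  # ~1.414
```

Refinement algorithms bound this ratio by a constant τ for every cell.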


Mesh Size

Assumes the boundary has "small" complexity.

The feature size function: f_P(x) = min{ r : |ball(x, r) ∩ P| ≥ 2 }

The feature size measure: μ_P(Ω) = ∫_Ω dx / f_P(x)^d

|M| = Θ(μ_P(Ω))
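The two definitions above are easy to evaluate numerically. A small sketch (my own, not from the talk), taking Ω to be the unit square with d = 2 and estimating the integral by plain Monte Carlo:

```python
import math
import random

def feature_size(x, P):
    """f_P(x): distance from x to its second-nearest point of P, i.e. the
    smallest r with |ball(x, r) ∩ P| >= 2."""
    return sorted(math.dist(x, p) for p in P)[1]

def mu_estimate(P, d=2, samples=20000, seed=0):
    """Monte Carlo estimate of mu_P(Omega) over the unit square
    (which has unit area, so the estimate is just the sample mean)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = (rng.random(), rng.random())
        total += 1.0 / feature_size(x, P) ** d
    return total / samples

P = [(0.2, 0.2), (0.8, 0.3), (0.5, 0.9), (0.45, 0.5)]
print(mu_estimate(P))
```

By the claim above, a quality mesh conforming to P must have Θ(μ_P(Ω)) vertices, so this number predicts the output size up to dimension-dependent constants.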


Mesh Size

[Figure: a Voronoi cell Vor(v) with inradius r_v and outradius R_v.]

Suppose that for all x ∈ Vor(v): r_v ≤ f_P(x) ≤ K R_v.
(Prove your algorithm achieves this; it is algorithm-specific, not for this talk.)

Then, writing V for the volume of the unit d-ball, b_v for the inscribed ball of Vor(v), and B_v for its circumscribed ball:

  V (r_v / (K R_v))^d = ∫_{b_v} dx / (K R_v)^d ≤ ∫_{b_v} dx / f_P(x)^d ≤ ∫_{Vor(v)} dx / f_P(x)^d
                      ≤ ∫_{Vor(v)} dx / r_v^d ≤ ∫_{B_v} dx / r_v^d = V (R_v / r_v)^d

So, for a mesh with aspect ratio R_v / r_v ≤ τ, up to the constant V:

  (1 / (Kτ))^d ≲ μ_P(Vor(v)) ≲ τ^d

There is at most and at least some constant amount of mass in each Voronoi cell!


Mesh Size

Domain: Ω ⊂ R^d

M⁻ = {v ∈ M | Vor(v) ⊆ Ω}
M⁺ = {v ∈ M | Vor(v) ∩ Ω ≠ ∅}

  |M⁻| (1 / (Kτ))^d ≲ μ_P(Ω) = Σ_{v ∈ M} μ_P(Vor(v) ∩ Ω) ≲ |M⁺| τ^d

Bounds are tight when |M⁺| ≈ |M⁻|.


Mesh Size

Tight per-instance bounds on the mesh size can be expressed in terms of the "pacing".

Order the points. For the ith point p_i, let NN(p_i) and 2NN(p_i) be its nearest and second-nearest neighbors among the points that precede it in the ordering, and let

  a = ‖p_i − NN(p_i)‖,  b = ‖p_i − 2NN(p_i)‖.

The pacing of the ith point is γ_i = b / a.

Let γ be the geometric mean of the γ_i, so Σ log γ_i = n log γ.

γ is the pacing of the ordering.
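The pacing is straightforward to compute. A sketch (my own illustration; it assumes, as stated above, that NN(p_i) and 2NN(p_i) are taken among the points preceding p_i in the ordering):

```python
import math

def pacing(points):
    """Geometric mean of gamma_i = b / a over i = 3..n, where a and b are the
    distances from p_i to its nearest and second-nearest predecessors.
    Requires at least three points."""
    log_sum, count = 0.0, 0
    for i in range(2, len(points)):
        dists = sorted(math.dist(points[i], q) for q in points[:i])
        a, b = dists[0], dists[1]
        log_sum += math.log(b / a)
        count += 1
    return math.exp(log_sum / count)

# Evenly spaced points inserted left to right: every gamma_i = 2.
print(pacing([(0, 0), (1, 0), (2, 0), (3, 0)]))  # ~2.0

# A close pair separated from the rest by a big empty annulus
# forces a huge pacing.
print(pacing([(0, 0), (1, 0), (1e-6, 0)]))  # ~1e6
```

A large pacing corresponds to the big-empty-annulus inputs discussed later in the talk.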


Mesh Size

We can write the feature size measure as a telescoping sum. Let P_i = {p_1, . . . , p_i}. Then

  μ_P = μ_{P_2} + Σ_{i=3}^{n} (μ_{P_i} − μ_{P_{i−1}}),

where each term of the sum is the effect of adding the ith point. When the boundary is "simple" and the first two points are not too close compared to the diameter,

  μ_{P_i}(Ω) − μ_{P_{i−1}}(Ω) = Θ(1 + log γ_i).

Thus,

  μ_P(Ω) = μ_{P_2}(Ω) + Θ(n + n log γ),

where μ_{P_2}(Ω) is the measure induced by just the first two points.

Output Mesh Size

Mesh Size

The previous bound implies that there is only one necessary (but not sufficient) condition for the output size to be superlinear in the number of input points: the pacing must be large.

[ Picture of bad case ]



Running Time

In an incremental construction, the points are added one at a time.

Where is the work?

1. Point Location: O(log n) per input vertex
2. Local Updates: O(1) per vertex

Goal: O(n log n + m)


Running Time

1. Keep it quality. Keep it sparse.

2. Avoid the one bad case. Use hierarchical structure.

3. Preprocess the input vertices for fast point location.


Running Time

1. Keep it quality. Keep it sparse.

Incremental Construction:
  Recover input (vertices or features)
  Refine
  Loop

Since the mesh is always quality, we avoid the worst case for Voronoi diagrams.

Insertions only require a constant number of local updates.


Running Time

2. Avoid the one bad case.

If you see a big empty annulus, do something different:
  - hierarchies
  - delayed input

Linear-size meshes are possible by relaxing the quality condition for this one case. [MPS08, HMPS09, MPS11, S12, MSV13]


Running Time

3. Preprocess the input vertices for fast point location.

How many steps? If we start the walk from the nearest already-inserted input point, we only need to take a constant number of steps.

Ordering the input points takes O(n log n) time.


Overview

A Defense of Theory:
  General lower bounds
  Theory can guide practice

Mesh Quality:
  Many choices.
  We focused on Voronoi Aspect Ratio.

Optimal Mesh Size:
  The feature size measure determines mesh size.
  The pacing determines the feature size measure.

Algorithmic suggestions for optimal running time:
  Use the Sparse Meshing paradigm.
  Adapt to large pacing.
  Preprocess for walk-based point location.

Thank You
