02 Analysis of Algorithms: Divide and Conquer


Analysis of Algorithms: Divide and Conquer

Andres Mendez-Vazquez

September 10, 2015

Outline

1 Divide and Conquer: The Holy Grail!!
  Introduction
  Split problems into smaller ones
  Using Induction to prove algorithm correctness
2 Using Induction to prove Algorithm Correctness
  Given an Algorithm
3 Asymptotic Notation
  Relation with step count
4 Examples of Complexities
  Interpreting the Notation
  Properties
5 Methods to Solve Recursions
  Substitution Method
  The Recursion-Tree Method
  The Master Method


Divide and Conquer

Divide et impera
A classic technique based on multi-branched recursion.

Thus, we have
Divide and Conquer works by recursively breaking down the problem into subproblems and solving those subproblems recursively.

Until you reach a base case!!!

Remark
Given the following equivalence:

Recursion ≡ Iteration (1)



Gauss and the Beginning

Carl Friedrich Gauss (1777–1855)
He devised a way to multiply two complex numbers as

(a + bi)(c + di) = ac − bd + (ad + bc)i (2)

By realizing that

bc + ad = (a + b)(c + d) − ac − bd (3)

Thus reducing the number of multiplications from four to three.

Actually
We can split a binary number such as 1001 into halves: 1001 = 1000 + 01 = 2^2 × 10 + 01
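Gauss's identity can be checked directly in code. A minimal sketch (not from the slides; the function name is mine) that computes a complex product using only three real multiplications:

```python
def complex_mult_3(a, b, c, d):
    """Compute (a + bi)(c + di) with three real multiplications."""
    ac = a * c
    bd = b * d
    # Gauss's trick: ad + bc = (a + b)(c + d) - ac - bd
    cross = (a + b) * (c + d) - ac - bd
    return (ac - bd, cross)  # (real part, imaginary part)

# (1 + 2i)(3 + 4i) = -5 + 10i
print(complex_mult_3(1, 2, 3, 4))  # (-5, 10)
```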


Thus

We can represent n-bit numbers x, y as
x = xL xR = 2^(n/2) xL + xR
y = yL yR = 2^(n/2) yL + yR

Thus, the multiplication can be found by using

xy = (2^(n/2) xL + xR)(2^(n/2) yL + yR) = 2^n xL yL + 2^(n/2)(xL yR + xR yL) + xR yR (4)

However
If we use Gauss's trick, we only need xL yL, xR yR, (xL + xR)(yL + yR) to calculate the multiplication:

xL yR + xR yL = (xL + xR)(yL + yR) − xL yL − xR yR
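This splitting scheme is the Karatsuba algorithm. A sketch of it (mine, not from the slides; it splits in base 10 rather than the slides' base 2, which changes nothing essential):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using Gauss's three-product trick."""
    if x < 10 or y < 10:               # base case: single-digit factor
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    xL, xR = divmod(x, 10 ** half)     # x = xL * 10^half + xR
    yL, yR = divmod(y, 10 ** half)
    p1 = karatsuba(xL, yL)
    p2 = karatsuba(xR, yR)
    p3 = karatsuba(xL + xR, yL + yR)   # contains the cross terms
    # xL*yR + xR*yL = p3 - p1 - p2, so only three recursive products are needed
    return p1 * 10 ** (2 * half) + (p3 - p1 - p2) * 10 ** half + p2

print(karatsuba(1234, 5678) == 1234 * 5678)  # True
```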


Now, You have this...

We have that
xy can be calculated by using the two parts, Left and Right.

Then
Thus, each of xL yL, (xL + xR)(yL + yR) and xR yR can be calculated in a similar way.

Recursion
This is known as a Recursive Procedure!!!


Complexities

Old Multiplication

T(n) = 4T(n/2) + Some Work (5)

New Multiplication

T(n) = 3T(n/2) + Some Work (6)

We will prove that
For old-style multiplication: O(n^2).
For new-style multiplication: O(n^(log2 3)).


Epitaph

We can do divide and conquer
In a really unclever way!!!

Or we can go and design something better
Thus, improving the speedup!!!

The difference between
A great design...
Or a crappy job...


Recursion is the base of Divide and Conquer

This is the natural way we do many things
We always attack smaller versions of the large problem first!!!

Stephen Cole Kleene
He defined the basics of the use of recursion.


Kleene and Company

Some facts about him
Stephen Cole Kleene (January 5, 1909 – January 25, 1994) was an American mathematician.
One of the students of Alonzo Church!!!
Church is best known for the lambda calculus, the Church–Turing thesis, and proving the undecidability of deciding by algorithm whether a first-order logic statement is Valid or Not Valid in a FOL system (a problem proposed by David Hilbert).

Recursion Theory
Kleene, along with Alan Turing, Emil Post, and others, is best known as a founder of the branch of mathematical logic known as recursion theory.
This theory subsequently helped to provide the foundations of theoretical computer science.


Recursion

Something Notable
Sometimes it is difficult to define an object explicitly.
It may be easy to define this object in terms of smaller versions of itself.
This process is called recursion!!!

Thus
We can use recursion to define sequences, functions, and sets.

Example
a_n = 2^n for n = 0, 1, 2, ... =⇒ 1, 2, 4, 8, 16, 32, ...

Thus, the sequence can be defined in a recursive way:

a_(n+1) = 2 × a_n (7)

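The recursive definition (7) translates directly into code. A minimal sketch:

```python
def a(n):
    """a_n = 2^n defined recursively: a_0 = 1, a_(n+1) = 2 * a_n."""
    if n == 0:           # basis step
        return 1
    return 2 * a(n - 1)  # recursive step

print([a(n) for n in range(6)])  # [1, 2, 4, 8, 16, 32]
```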

Recursively Defined Functions

First
Assume T is a function with the set of nonnegative integers as its domain.

Second
We use two steps to define T:

Basis step: Specify the value of T(0).

Recursive step: Give a rule for T(x) using T(y), where 0 ≤ y < x.

Thus
Such a definition is called a recursive or inductive definition.


Example

Can you give me the following?
Give an inductive definition of the factorial function T(n) = n!.

Base case
Which is the base case?

Recursive case
What is the recursive case?
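The standard answer to the slide's question, as a sketch: the base case is T(0) = 1, and the recursive case is T(n) = n · T(n − 1):

```python
def factorial(n):
    """T(n) = n! defined inductively."""
    if n == 0:                    # base case: T(0) = 1
        return 1
    return n * factorial(n - 1)   # recursive case: T(n) = n * T(n - 1)

print(factorial(5))  # 120
```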


We can go further...

Recursively defined sets and structures
Assume S is a set. We use two steps to define the elements of S.

Basis Step
Specify an initial collection of elements.

Recursive Step
Give a rule for forming new elements from those already known to be in S.


Example

Consider
Consider S ⊆ Z defined by...

Basis Step
3 ∈ S

Recursive Step
If x ∈ S and y ∈ S, then x + y ∈ S.


Example

Elements
3 ∈ S
3 + 3 = 6 ∈ S
6 + 3 = 9 ∈ S
6 + 6 = 12 ∈ S
· · ·
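This recursive definition can be simulated by closing a set under addition until nothing new appears. A sketch (the bound and names are mine) that enumerates the elements of S below a limit:

```python
def generate_S(limit):
    """Elements of S below `limit`: basis 3 ∈ S; if x, y ∈ S then x + y ∈ S."""
    S = {3}                       # basis step
    changed = True
    while changed:                # apply the recursive step to a fixed point
        changed = False
        for x in list(S):
            for y in list(S):
                if x + y < limit and x + y not in S:
                    S.add(x + y)
                    changed = True
    return sorted(S)

print(generate_S(20))  # [3, 6, 9, 12, 15, 18] — the positive multiples of 3 below 20
```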

Application: Divide and Conquer

1 Divide
Split the problem into a number of subproblems.

2 Conquer
Solve each subproblem recursively.

3 Combine
Combine the solutions of the subproblems into the solution of the original problem.

Divide and Conquer Example: Merge Sort

Merge-Sort(A, p, r)
1 if p < r then
2     q ← ⌊(p + r)/2⌋
3     Merge-Sort(A, p, q)
4     Merge-Sort(A, q + 1, r)
5     MERGE(A, p, q, r)

Explanation
Lines 2–4 are the divide and conquer part; line 5 (MERGE) is the combine part.

Merge Sort

Merge(A, p, q, r)
1  n1 ← q − p + 1, n2 ← r − q
2  let L[1, ..., n1 + 1] and R[1, ..., n2 + 1] be new arrays
3  for i ← 1 to n1
4      L[i] ← A[p + i − 1]
5  for j ← 1 to n2
6      R[j] ← A[q + j]
7  L[n1 + 1] ← ∞
8  R[n2 + 1] ← ∞
9  i ← 1, j ← 1
10 for k ← p to r
11     if L[i] ≤ R[j] then
12         A[k] ← L[i]
13         i ← i + 1
14     else
15         A[k] ← R[j]
16         j ← j + 1

Explanation
Lines 1–8 copy the two lists to be merged into two containers; lines 9–16 perform the merging.
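The pseudocode above maps to a direct Python translation; this sketch uses 0-based indices instead of the slides' 1-based ones, and float infinity for the ∞ sentinels:

```python
import math

def merge(A, p, q, r):
    """Merge the sorted runs A[p..q] and A[q+1..r] (inclusive, 0-based)."""
    L = A[p:q + 1] + [math.inf]      # sentinel avoids bounds checks
    R = A[q + 1:r + 1] + [math.inf]
    i = j = 0
    for k in range(p, r + 1):
        if L[i] <= R[j]:
            A[k] = L[i]; i += 1
        else:
            A[k] = R[j]; j += 1

def merge_sort(A, p, r):
    if p < r:
        q = (p + r) // 2
        merge_sort(A, p, q)        # conquer left half
        merge_sort(A, q + 1, r)    # conquer right half
        merge(A, p, q, r)          # combine

A = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(A, 0, len(A) - 1)
print(A)  # [1, 2, 2, 3, 4, 5, 6, 7]
```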

The Merge Sort Recursion Cost Function

[Figure: the recursion tree for the merge sort cost]

Thus, we have

Each Step for ONE Merging takes...
A certain constant time c!!!

Thus, if we merge n elements
Total time at level 1 of the recursion:

cn (8)

In addition...
We have that the recursion splits the work at each level by

1/2^i, for i = 1, ..., log n (9)


Thus, we have the following Recursion

Base Case n = 1

T(n) = c (10)

Recursive Step n > 1

T(n) = 2T(n/2) + cn (11)

Finally

T(n) = { c              if n = 1
       { 2T(n/2) + cn   if n > 1     (12)

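The recurrence (12) can be unrolled numerically. A sketch (with c = 1, my choice) confirming that T(n) = cn log2 n + cn for powers of two:

```python
import math

def T(n, c=1):
    """Evaluate the merge sort recurrence T(1) = c, T(n) = 2T(n/2) + cn."""
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c * n

for n in (1, 2, 4, 8, 16):
    closed = n * math.log2(n) + n    # closed form n*log2(n) + n, with c = 1
    print(n, T(n), closed)           # the two columns agree
```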


Recursion and Induction

Something Notable
When a sequence is defined recursively, mathematical induction can be used to prove results about the sequence.

Example

We want...
Show that the set S is the set A of all positive integers that are multiples of 3.

First A ⊆ S
Show that ∀k ≥ 1, P(k) =⇒ P(k + 1)

Define the inductive hypothesis
P(k): 3k ∈ S


Example

We know the following:
3 ∈ S

We need to prove
P(k + 1): 3(k + 1) = 3k + 3 ∈ S

We have, using the recursive definition,
3 + 3k ∈ S


Example

Finally
By Mathematical Induction, 3n ∈ S =⇒ A ⊆ S.

Now, show that S ⊆ A
Or ∀x (x ∈ S ⇒ x ∈ A)

We know that
Basis Step: 3 ∈ S ⇒ 3 ∈ A
Recursive Step: x ∈ S, y ∈ S ⇒ x + y ∈ S


Example

First
Each element x ∈ S is a multiple of 3, by the basis and recursive steps.

Thus
If x, y ∈ S, then x + y ∈ S, and both x and y are multiples of 3.

Again
x + y is then a multiple of 3, so x + y ∈ A.


Structural induction

Something Notable
Instead of using mathematical induction to prove a result about recursively defined sets, we can use a more convenient form of induction known as structural induction.

First
Let n ∈ S; show P(n) is true using structural induction.

Basis Step
Assume j is an element specified in the basis step of the definition.
Show ∀j P(j) is true.


Structural induction

Recursive step
Let x be a new element constructed in the recursive step of the definition.
Assume k1, k2, ..., km are the elements used to construct x in the recursive step of the definition.
Show ∀ k1, k2, ..., km ((P(k1) ∧ P(k2) ∧ ... ∧ P(km)) → P(x)).



Example of an Algorithm

Data: Unsorted sequence A
Result: Sorted sequence A
Insertion-Sort(A)
for j ← 2 to length(A) do
    key ← A[j];
    // Insert A[j] into the sorted sequence A[1, ..., j − 1]
    i ← j − 1;
    while i > 0 and A[i] > key do
        A[i + 1] ← A[i];
        i ← i − 1;
    end
    A[i + 1] ← key
end
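A 0-based Python translation of the pseudocode, with an assert (my addition, not in the slides) making the loop invariant explicit: after each outer iteration, the prefix processed so far is sorted.

```python
def insertion_sort(A):
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:   # shift larger elements one slot right
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key
        # loop invariant: the prefix A[0..j] is sorted after iteration j
        assert all(A[k] <= A[k + 1] for k in range(j))
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```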

Using Induction to prove algorithm correctness

Inductive proof Structure
Initial input: an input of n elements.
Steps:
1 Initialization
2 Maintenance
3 Termination

Example (Insertion Sort)
Be always sure about your input.
Initialization: true for one element; for example, A[1] is an already sorted array.
Maintenance: true for j − 1 elements; insertion sort maintains the sorted property during and after the loop.
Termination: true for n elements; at the end of the algorithm, A[1, ..., n] is a sorted array, which is the whole array.

In more detail: Thanks Luis Rodriguez, Oracle Master 2012

First, we define the following sets with sorted elements
Less = 〈x1, ..., xk | xi < key, i = 1, ..., k〉
Greater = 〈x1, ..., xm | xj > key, j = 1, ..., m〉
I = elements still not compared to the key

Initialization
We have A[1...1] with only one element ⇒ it is sorted.

Maintenance
Before we enter the inner while loop, we have
1 A[1..j − 1] an already sorted array
2 Less = ∅
3 Greater = ∅
4 I = A[1...j − 1].


Then

Case I: You never enter the inner loop; thus A[j − 1] ≤ key, so Less = A[1..j − 1] and A[1..j] is a sorted array.

Case II: You entered the inner while loop. Thus, at each iteration we have the structure A[1...j] = I A[i] Greater, where Greater = 〈A[i], A[i + 1], · · · , A[j − 1]〉.

Note: I and Greater are sorted, so that A[1...j] is sorted by itself at this moment in the inner loop.

37 / 65

Now

Thus, we get out of the inner loop once I = ∅.
We have that A[1...j] = Less A[i + 1] Greater, where A[i + 2] == A[i + 1].
Thus, A[1...j] is sorted before inserting the key into the position A[i + 1].
Then, because the elements of A[1...j] are sorted, after inserting the key at position i + 1 the array is still sorted after iteration j.

38 / 65

Finally, Termination

Termination: Once j > length(A), we get out of the outer loop with j = n + 1. Then, by the maintenance argument, the subarray A[1...n] is sorted, as we wanted.

39 / 65
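The initialization/maintenance/termination argument above can be sketched in code (my illustration, not the lecture's exact program): an insertion sort that explicitly checks the invariant "the prefix is sorted" at the top of every outer iteration.

```java
public class InvariantDemo {
    // true if A[0..len-1] is sorted in non-decreasing order
    static boolean sortedPrefix(int[] A, int len) {
        for (int k = 1; k < len; k++)
            if (A[k - 1] > A[k]) return false;
        return true;
    }

    static int[] insertionSort(int[] A) {
        int[] B = A.clone();
        for (int j = 1; j < B.length; j++) {
            // Maintenance: the invariant holds on entry to each iteration
            if (!sortedPrefix(B, j)) throw new AssertionError("invariant broken at j=" + j);
            int key = B[j];
            int i = j - 1;
            while (i >= 0 && B[i] > key) { // shift Greater one slot to the right
                B[i + 1] = B[i];
                i--;
            }
            B[i + 1] = key; // insert the key between Less and Greater
        }
        // Termination: j = n, so the invariant says all of B is sorted
        return B;
    }
}
```

Running it on any input exercises the invariant check on every outer iteration; a broken maintenance step would throw immediately.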

Actually

This is known as: Loop Invariance!!!

Why is this important? A computational system that can compute every Turing-computable function is called Turing complete (or Turing powerful).

Properties: A Turing-complete system is called Turing equivalent if every function it can compute is also Turing computable.

It computes precisely the same class of functions as Turing machines do.

40 / 65

Recursion ≡ Iteration

Then: Since you can build a Turing-complete language using strictly iterative structures, and a Turing-complete language using only recursive structures, the two are therefore equivalent.

Proof From Lambda Calculus:
Assume languages IT (with iterative constructs only) and REC (with recursive constructs only).
Simulate a universal Turing machine using IT, then simulate a universal Turing machine using REC.
The existence of the simulator programs guarantees that both IT and REC can compute all the computable functions.

41 / 65
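As a tiny illustration of the equivalence (my example, not from the slides): the same function written with a recursive structure and with an iterative one computes identical values.

```java
public class RecIter {
    // recursive structure
    static long factRec(int n) {
        return n <= 1 ? 1 : n * factRec(n - 1);
    }

    // the same computation with the recursion unrolled into a loop
    static long factIter(int n) {
        long acc = 1;
        for (int k = 2; k <= n; k++) acc *= k;
        return acc;
    }
}
```

The general equivalence is deeper (it needs an explicit stack for non-tail recursions), but this shows the direction of the translation.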

Nevertheless

Important: We use recursive procedures when we begin to solve new problems, so we can understand them. Then, we move everything to iterative for speed!!!

42 / 65

Big O, Big Ω and Θ

1 O: Upper Bound
2 Ω: Lower Bound
3 Θ: Tight Bound
4 Theorem about Θ

Definition: For a given function g(n)

O(g(n)) = {f(n) | there exist c > 0 and n₀ > 0 s.t. 0 ≤ f(n) ≤ c·g(n) ∀ n ≥ n₀}

Example: [figure: f(n) eventually bounded above by cg(n)]

43 / 65

Definition: For a given function g(n)

Ω(g(n)) = {f(n) | there exist c > 0 and n₀ > 0 s.t. 0 ≤ c·g(n) ≤ f(n) ∀ n ≥ n₀}

Example: [figure: f(n) eventually bounded below by cg(n)]

43 / 65

Definition: For a given function g(n)

Θ(g(n)) = {f(n) | there exist c₁ > 0, c₂ > 0 and n₀ > 0 s.t. 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) ∀ n ≥ n₀}

Example: [figure: f(n) sandwiched between c₁g(n) and c₂g(n)]

43 / 65

Theorem: For any two functions f(n) and g(n), we have that f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

43 / 65
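A quick numeric illustration of the definitions (my example, not from the slides): f(n) = n²/2 + 3n is Θ(n²), witnessed by c₁ = 1/2, c₂ = 1 and n₀ = 6, since 3n ≤ n²/2 once n ≥ 6.

```java
public class ThetaDemo {
    // checks c1*g(n) <= f(n) <= c2*g(n) for f(n) = n^2/2 + 3n, g(n) = n^2,
    // with the witnesses c1 = 1/2 and c2 = 1
    static boolean thetaWitness(int n) {
        double f = 0.5 * n * n + 3.0 * n;
        double g = (double) n * n;
        return 0.5 * g <= f && f <= 1.0 * g;
    }
}
```

The check fails for n = 5 (the upper bound does not hold yet) and holds for every n ≥ 6, exactly as the definition requires for some n₀.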

Can we relate this with practical examples?

You could say: This is too theoretical!

However, this is not the case!! Look at this Java code...

45 / 65

Example: Step count of Insertion Sort in Java (counting when A.length = n)

// Sort A; assume it is full                             Step
public int[] InsertionSort(int[] A) {
    // Initial variables                                 0
    int B[] = new int[A.length];                         1
    int size = 1;                                        1
    int i, j, t;                                         1
    // Initialize the array B                            0
    B[0] = A[0];                                         1
    for (i = 1; i < A.length; i++) {                     n
        t = A[i];                                        n − 1
        for (j = size - 1; j >= 0 && t < B[j]; j--) {    i + 1
            // shift to the right                        0
            B[j + 1] = B[j];                             i
        }
        B[j + 1] = t;                                    n − 1
        size++;                                          n − 1
    }
    return B;                                            1
}

46 / 65

The Result

The step count, summed over the whole procedure, is

6 + 3(n − 1) + n + ∑_{i=1}^{n−1} (i + 1) + ∑_{i=1}^{n−1} i  (13)

The summations: They contribute the quadratic term n².

Complexity: Insertion sort complexity is O(n²).

47 / 65
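Equation (13) can be checked numerically (a small harness of mine): summing the per-line step counts gives exactly n² + 4n + 2 for every n ≥ 1, the closed form derived on the next slide.

```java
public class StepCount {
    // total step count of equation (13) for input size n
    static long steps(int n) {
        long s = 6 + 3L * (n - 1) + n;       // constant and linear terms
        for (int i = 1; i <= n - 1; i++)
            s += (i + 1) + i;                // the two summations
        return s;
    }
}
```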

What does this mean for insertion sort?

We have

6 + 3(n − 1) + n + ∑_{i=1}^{n−1} (i + 1) + ∑_{i=1}^{n−1} i
  = 3 + 4n + n(n − 1)/2 + (n − 1) + n(n − 1)/2
  = 2 + 5n + n(n − 1)
  = n² + 4n + 2 ≤ n² + 4n² + 2n²

Thus

n² + 4n + 2 ≤ 7n²  (14)

with T_insertion(n) = n² + 4n + 2 describing the number of steps for insertion sort when we have n numbers.

48 / 65

Actually

For n₀ = 2

2² + 4 × 2 + 2 = 14 < 7 × 2² = 28  (15)

Graphically: [figure: plot of n² + 4n + 2 against 7n²]

49 / 65
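The inequality (14) can also be verified mechanically for a range of n ≥ n₀ = 2 (a sketch of mine):

```java
public class BoundCheck {
    // does n^2 + 4n + 2 <= 7 n^2 hold at this n?
    static boolean holds(long n) {
        return n * n + 4 * n + 2 <= 7 * n * n;
    }
}
```

Of course a finite scan is not a proof; the point is that the witnesses c = 7, n₀ = 2 behave as claimed.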

Meaning

First: Time (the number of operations) does not exceed cn² for a constant c on any input of size n (n suitably large).

Questions: Is O(n²) too much time? Is the algorithm practical?

For this, imagine we have a machine able to execute 10⁹ instructions/second.

50 / 65

Then

We have the following (at 10⁹ instructions/second):

n        | n      | n log n | n²         | n³          | n⁴
1000     | 1 μs   | 10 μs   | 1 ms       | 1 second    | 17 minutes
10,000   | 10 μs  | 130 μs  | 100 ms     | 17 minutes  | 116 days
10⁶      | 1 ms   | 20 ms   | 17 minutes | 32 years    | 3 × 10⁷ years

It is much worse:

n        | n¹⁰               | 2ⁿ
1000     | 3.2 × 10¹³ years  | 3.2 × 10²⁸³ years
10,000   | ???               | ???
10⁶      | ?????             | ?????

The Reign of the Non-Polynomial Algorithms

51 / 65

ASYMPTOTIC NOTATION, THE “LITTLE BOUNDS”

1 “little o” Bound
2 “little ω” Bound

Definition: For a given function g(n)

o(g(n)) = {f(n) | for any c > 0 there exists n₀ > 0 s.t. 0 ≤ f(n) < c·g(n) ∀ n ≥ n₀}

Observations: It is not a tight bound. Under the definition, we have for any f(n) ∈ o(g(n)):

lim_{n→∞} f(n)/g(n) = 0

52 / 65

Definition: For a given function g(n)

ω(g(n)) = {f(n) | for any c > 0 there exists n₀ > 0 s.t. 0 ≤ c·g(n) < f(n) ∀ n ≥ n₀}

Observations: It is not a tight bound. Under the definition, we have for any f(n) ∈ ω(g(n)):

lim_{n→∞} f(n)/g(n) = ∞

52 / 65

Interpretation

1 How do you interpret n = O(n²)?
2 How do you interpret 2n² + 3n + 1 = 2n² + Θ(n)?

Interpretation: It means n belongs to the set O(n²).

54 / 65

Interpretation: There exists f(n) ∈ Θ(n) such that

2n² + 3n + 1 = 2n² + f(n) = 2n² + Θ(n)

54 / 65

Properties

1 Transitivity: f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))
2 Reflexivity: f(n) = Θ(f(n))
3 Symmetry: f(n) = Θ(g(n)) ⇐⇒ g(n) = Θ(f(n))
4 Transpose Symmetry: f(n) = O(g(n)) ⇐⇒ g(n) = Ω(f(n))

56 / 65

Ok, we have the basics...

Now... What do we do?

We will look at methods to solve recursions!!!
1 Substitution Method
2 Recursion-Tree Method
3 Master Method

57 / 65

The Substitution Method

1 Substitution Method
2 Making a Good Guess
3 Guess
4 Solve by substitution and induction
5 Subtleties

The Method: Guess the form of the solution. Use mathematical induction to find the constants and show that the solution works.

59 / 65

The Method: Solve the recurrence

T(n) = 2T(⌊n/2⌋) + n

59 / 65

The Method: Guess that T(n) = O(n log n).

59 / 65

Induction: We assume that the bound holds for ⌊n/2⌋, i.e.

T(⌊n/2⌋) ≤ c⌊n/2⌋ log(⌊n/2⌋),

and substitute into the recurrence:

T(n) ≤ 2c⌊n/2⌋ log(⌊n/2⌋) + n
     ≤ cn log(n/2) + n
     = cn log n − cn + n
     ≤ cn log n

as long as c ≥ 1.

59 / 65
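The guess can be sanity-checked numerically (my harness; the constant c = 2 and the scan range are assumptions for the test, not part of the proof): compute T(n) = 2T(⌊n/2⌋) + n with T(1) = 1 and compare against c·n·log₂ n.

```java
public class SubstCheck {
    static long[] memo = new long[4097];

    // T(n) = 2T(floor(n/2)) + n, with T(1) = 1
    static long T(int n) {
        if (n == 1) return 1;
        if (memo[n] == 0) memo[n] = 2 * T(n / 2) + n;
        return memo[n];
    }

    // does T(n) <= 2 * n * log2(n) hold at this n?
    static boolean holds(int n) {
        return T(n) <= 2.0 * n * (Math.log(n) / Math.log(2));
    }
}
```

Note the base case is checked at n = 2 and n = 3 rather than n = 1, where n log n = 0; the induction in the slide has the same subtlety.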

Subtleties: Consider the recurrence

T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 1

59 / 65

The Recursion-Tree Method

Surprise: Sometimes it is hard to make a good guess. For example,

T(n) = 3T(n/4) + cn²

Therefore, we draw the recursion tree.

61 / 65

The Recursion-Tree Method: Counting Again!!!

A subproblem for a node at depth i has size n/4^i; then, once n/4^i = 1 ⇒ i = log₄ n.
At each level i = 0, 1, 2, ..., log₄ n − 1, the cost of each node is c(n/4^i)².
At each level i = 0, 1, 2, ..., log₄ n − 1, the total cost of the work is 3^i · c(n/4^i)² = (3/16)^i cn².
At depth log₄ n, we have 3^{log₄ n} = n^{log₄ 3} nodes.

Then, we have that

T(n) = ∑_{i=0}^{log₄ n − 1} (3/16)^i cn² + n^{log₄ 3}
     < ∑_{i=0}^{∞} (3/16)^i cn² + n^{log₄ 3}
     = (1/(1 − 3/16)) cn² + n^{log₄ 3}
     = O(n²)

62 / 65
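Again the predicted bound can be checked numerically (my harness; I take c = 1 in cn², and the constant 2 in T(n) ≤ 2n² is an assumption chosen for the test, comfortably above the 1/(1 − 3/16) ≈ 1.23 from the geometric series):

```java
import java.util.HashMap;

public class TreeCheck {
    static HashMap<Integer, Long> memo = new HashMap<>();

    // T(n) = 3T(floor(n/4)) + n^2, with T(n) = 1 for n <= 1
    static long T(int n) {
        if (n <= 1) return 1;
        Long v = memo.get(n);
        if (v == null) {
            v = 3 * T(n / 4) + (long) n * n;
            memo.put(n, v);
        }
        return v;
    }

    // does T(n) <= 2 n^2 hold at this n?
    static boolean holds(int n) {
        return T(n) <= 2L * n * n;
    }
}
```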

The Master Theorem

Theorem (Cookbook for solving T(n) = aT(n/b) + f(n))
Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the non-negative integers by the recurrence

T(n) = aT(n/b) + f(n)  (16)

where we interpret n/b as ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows:
1 If f(n) = O(n^{log_b a − ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}).
2 If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} lg n).
3 If f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

64 / 65
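For a polynomial driving function f(n) = n^k, the three cases reduce to comparing k with log_b a (a sketch of mine, not the full theorem; for k > log_b a the regularity condition af(n/b) ≤ cf(n) holds automatically with c = a/b^k < 1):

```java
public class MasterCase {
    // Master Theorem case (1, 2 or 3) for T(n) = a*T(n/b) + n^k
    static int masterCase(int a, int b, int k) {
        double crit = Math.log(a) / Math.log(b); // log_b a, the "critical exponent"
        double eps = 1e-9;                       // tolerance for float comparison
        if (k < crit - eps) return 1;  // T(n) = Theta(n^{log_b a})
        if (k > crit + eps) return 3;  // T(n) = Theta(n^k)
        return 2;                      // T(n) = Theta(n^k * lg n)
    }
}
```

For example, a = 9, b = 3, k = 1 gives case 1, matching the worked example that follows.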

Using the Master Theorem

1 Consider T(n) = 9T(n/3) + n
2 We have a = 9, b = 3 and f(n) = n
3 Then n^{log₃ 9} = Θ(n²) and f(n) = O(n^{log₃ 9 − ε}) with ε = 1
4 We then use case 1 of the Master Theorem: T(n) = Θ(n²)

65 / 65
