
Recursion Ac

From Wikipedia, the free encyclopedia


Contents

1 Anonymous recursion
    1.1 Use
    1.2 Alternatives
        1.2.1 Named functions
        1.2.2 Passing functions as arguments
    1.3 Examples
        1.3.1 JavaScript
        1.3.2 Perl
    1.4 References

2 Bar recursion
    2.1 Technical Definition
    2.2 References

3 Corecursion
    3.1 Examples
        3.1.1 Factorial
        3.1.2 Fibonacci sequence
        3.1.3 Tree traversal
    3.2 Definition
    3.3 Discussion
    3.4 History
    3.5 See also
    3.6 Notes
    3.7 References

4 Course-of-values recursion
    4.1 Definition and examples
    4.2 Equivalence to primitive recursion
    4.3 Application to primitive recursive functions
    4.4 References

5 Recursion
    5.1 Formal definitions
    5.2 Informal definition
    5.3 In language
        5.3.1 Recursive humor
    5.4 In mathematics
        5.4.1 Recursively defined sets
        5.4.2 Finite subdivision rules
        5.4.3 Functional recursion
        5.4.4 Proofs involving recursive definitions
        5.4.5 Recursive optimization
    5.5 In computer science
    5.6 In art
    5.7 The recursion theorem
        5.7.1 Proof of uniqueness
        5.7.2 Examples
    5.8 See also
    5.9 Bibliography
    5.10 References
    5.11 External links

6 Recursion (computer science)
    6.1 Recursive functions and algorithms
    6.2 Recursive data types
        6.2.1 Inductively defined data
        6.2.2 Coinductively defined data and corecursion
    6.3 Types of recursion
        6.3.1 Single recursion and multiple recursion
        6.3.2 Indirect recursion
        6.3.3 Anonymous recursion
        6.3.4 Structural versus generative recursion
    6.4 Recursive programs
        6.4.1 Recursive procedures
        6.4.2 Recursive data structures (structural recursion)
    6.5 Implementation issues
        6.5.1 Wrapper function
        6.5.2 Short-circuiting the base case
        6.5.3 Hybrid algorithm
    6.6 Recursion versus iteration
        6.6.1 Expressive power
        6.6.2 Performance issues
        6.6.3 Stack space
        6.6.4 Multiply recursive problems
    6.7 Tail-recursive functions
    6.8 Order of execution
        6.8.1 Function 1
        6.8.2 Function 2 with swapped lines
    6.9 Time-efficiency of recursive algorithms
        6.9.1 Shortcut rule
    6.10 See also
    6.11 Notes and references
    6.12 Further reading
    6.13 External links
    6.14 Text and image sources, contributors, and licenses
        6.14.1 Text
        6.14.2 Images
        6.14.3 Content license


Chapter 1

Anonymous recursion

In computer science, anonymous recursion is recursion which does not explicitly call a function by name. This can be done either explicitly, by using a higher-order function – passing in a function as an argument and calling it – or implicitly, via reflection features which allow one to access certain functions depending on the current context, especially "the current function" or sometimes "the calling function of the current function".

In programming practice, anonymous recursion is notably used in JavaScript, which provides reflection facilities to support it. In general programming practice, however, this is considered poor style, and recursion with named functions is suggested instead. Anonymous recursion via explicitly passing functions as arguments is possible in any language that supports functions as arguments, though this is rarely used in practice, as it is longer and less clear than explicitly recursing by name.

In theoretical computer science, anonymous recursion is important, as it shows that one can implement recursion without requiring named functions. This is particularly important for the lambda calculus, which has anonymous unary functions, but is able to compute any recursive function. This anonymous recursion can be produced generically via fixed-point combinators.

1.1 Use

Anonymous recursion is primarily of use in allowing recursion for anonymous functions, particularly when they form closures or are used as callbacks, to avoid having to bind the name of the function.

Anonymous recursion primarily consists of calling "the current function", which results in direct recursion. Anonymous indirect recursion is possible, such as by calling "the caller (the previous function)", or, more rarely, by going further up the call stack, and this can be chained to produce mutual recursion. The self-reference of "the current function" is a functional equivalent of the "this" keyword in object-oriented programming, allowing one to refer to the current context.

Anonymous recursion can also be used for named functions, rather than calling them by name, say to specify that one is recursing on the current function, or to allow one to rename the function without needing to change the name where it calls itself. However, as a matter of programming style this is generally not done.

1.2 Alternatives

1.2.1 Named functions

The usual alternative is to use named functions and named recursion. Given an anonymous function, this can be done either by binding a name to the function, as in named function expressions in JavaScript, or by assigning the function to a variable and then calling the variable, as in function statements in JavaScript. Since languages that allow anonymous functions generally allow assigning these functions to variables (if not first-class functions), many languages do not provide a way to refer to the function itself, and explicitly reject anonymous recursion; examples include Go.[1]


For example, in JavaScript the factorial function can be defined via anonymous recursion as such:[2]

[1,2,3,4,5].map(function(n) { return (!(n>1))? 1 : arguments.callee(n-1)*n; });

Rewritten to use a named function expression yields:

[1,2,3,4,5].map(function factorial(n) { return (!(n>1))? 1 : factorial(n-1)*n; });

1.2.2 Passing functions as arguments

Even without mechanisms to refer to the current function or calling function, anonymous recursion is possible in a language that allows functions as arguments. This is done by adding another parameter to the basic recursive function and using this parameter as the function for the recursive call. This creates a higher-order function, and passing this higher function itself allows anonymous recursion within the actual recursive function. This can be done purely anonymously by applying a fixed-point combinator to this higher order function. This is mainly of academic interest, particularly to show that the lambda calculus has recursion, as the resulting expression is significantly more complicated than the original named recursive function. Conversely, the use of fixed-point combinators may be generically referred to as "anonymous recursion", as this is a notable use of them, though they have other applications.[3][4]

This is illustrated below using Python. First, a standard named recursion:

def fact(n):
    if n == 0:
        return 1
    return n * fact(n - 1)

Using a higher-order function so the top-level function recurses anonymously on an argument, but still needing the standard recursive function as an argument:

def fact0(n0):
    if n0 == 0:
        return 1
    return n0 * fact0(n0 - 1)

fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(n1 - 1)
fact = lambda n: fact1(fact0, n)

We can eliminate the standard recursive function by passing the function argument into the call:

fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(f, n1 - 1)
fact = lambda n: fact1(fact1, n)

The second line can be replaced by a generic higher-order function called a combinator:

F = lambda f: (lambda x: f(f, x))
fact1 = lambda f, n1: 1 if n1 == 0 else n1 * f(f, n1 - 1)
fact = F(fact1)

Written anonymously:[5]

(lambda f: (lambda x: f(f, x))) \
    (lambda g, n1: 1 if n1 == 0 else n1 * g(g, n1 - 1))

In the lambda calculus, which only uses functions of a single variable, this can be done via the Y combinator. First make the higher-order function of two variables be a function of a single variable, which directly returns a function, by currying:

fact1 = lambda f: (lambda n1: 1 if n1 == 0 else n1 * f(f)(n1 - 1))
fact = fact1(fact1)

There are two "applying a higher order function to itself" operations here: f(f) in the first line and fact1(fact1) in the second. Factoring out the second double application into a combinator yields:

C = lambda x: x(x)
fact1 = lambda f: (lambda n1: 1 if n1 == 0 else n1 * f(f)(n1 - 1))
fact = C(fact1)

Factoring out the other double application yields:

C = lambda x: x(x)
D = lambda f: (lambda x: f(lambda v: x(x)(v)))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = C(D(fact1))

Combining the two combinators into one yields the Y combinator:

C = lambda x: x(x)
D = lambda f: (lambda x: f(lambda v: x(x)(v)))
Y = lambda y: C(D(y))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = Y(fact1)

Expanding out the Y combinator yields:

Y = lambda f: (lambda x: f(lambda v: x(x)(v))) \
    (lambda x: f(lambda v: x(x)(v)))
fact1 = lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1))
fact = Y(fact1)

Combining these yields a recursive definition of the factorial in lambda calculus (anonymous functions of a single variable):[6]

(lambda f: (lambda x: f(lambda v: x(x)(v))) (lambda x: f(lambda v: x(x)(v)))) \
    (lambda g: (lambda n1: 1 if n1 == 0 else n1 * g(n1 - 1)))

1.3 Examples

1.3.1 JavaScript

In JavaScript, the current function is accessible via arguments.callee, while the calling function is accessible via arguments.caller. These allow anonymous recursion, such as in this implementation of the factorial:[2]

[1,2,3,4,5].map(function(n) { return (!(n>1))? 1 : arguments.callee(n-1)*n; });

1.3.2 Perl

Starting with Perl 5.16, the current subroutine is accessible via the __SUB__ token, which returns a reference to the current subroutine, or undef outside a subroutine.[7] This allows anonymous recursion, such as in the following implementation of the factorial:

use feature ":5.16";
sub { my $x = shift; $x == 0 ? 1 : $x * __SUB__->( $x - 1 ); }

1.4 References

[1] Issue 226: It's impossible to recurse an anonymous function in Go without workarounds.

[2] answer by olliej, Oct 25 '08 to "Why was the arguments.callee.caller property deprecated in JavaScript?", StackOverflow

[3] This terminology appears to be largely folklore, but it does appear in the following:

• Trey Nash, Accelerated C# 2008, Apress, 2007, ISBN 1-59059-873-3, pp. 462–463. Derived substantially from Wes Dyer's blog (see next item).

• Wes Dyer, Anonymous Recursion in C#, February 02, 2007; contains a substantially similar example found in the book above, but accompanied by more discussion.

[4] The If Works, Deriving the Y combinator, January 10th, 2008

[5] Hugo Walter’s answer to "Can a lambda function call itself recursively in Python?"

[6] Nux’s answer to "Can a lambda function call itself recursively in Python?"

[7] Perldoc, "The 'current_sub' feature", perldoc feature


Chapter 2

Bar recursion

Bar recursion is a generalized form of recursion developed by C. Spector in his 1962 paper.[1] It is related to bar induction in the same fashion that primitive recursion is related to ordinary induction, or transfinite recursion is related to transfinite induction.

2.1 Technical Definition

Let V, R, and O be types, and i be any natural number, representing a sequence of parameters taken from V. Then the function sequence f of functions f_n from V^(i+n) → R to O is defined by bar recursion from the functions L_n : R → O and B with B_n : ((V^(i+n) → R) × (V^n → R)) → O if:

• f_n((λα:V^(i+n))r) = L_n(r) for any r long enough that L_{n+k} on any extension of r equals L_n. Assuming L is a continuous sequence, there must be such an r, because a continuous function can use only finitely much data.

• f_n(p) = B_n(p, (λx:V)f_{n+1}(cat(p, x))) for any p in V^(i+n) → R.

Here "cat" is the concatenation function, sending p, x to the sequence which starts with p, and has x as its last term. (This definition is based on the one by Escardó and Oliva.[2])

Provided that for every sufficiently long function (λα)r of type V^i → R, there is some n with L_n(r) = B_n((λα)r, (λx:V)L_{n+1}(r)), the bar induction rule ensures that f is well-defined.

The idea is that one extends the sequence arbitrarily, using the recursion term B to determine the effect, until a sufficiently long node of the tree of sequences over V is reached; then the base term L determines the final value of f. The well-definedness condition corresponds to the requirement that every infinite path must eventually pass through a sufficiently long node: the same requirement that is needed to invoke a bar induction.

The principles of bar induction and bar recursion are the intuitionistic equivalents of the axiom of dependent choices.[3]

2.2 References

[1] C. Spector (1962). "Provably recursive functionals of analysis: a consistency proof of analysis by an extension of principles in current intuitionistic mathematics". In F. D. E. Dekker. Recursive Function Theory: Proc. Symposia in Pure Mathematics 5. American Mathematical Society. pp. 1–27.

[2] Martín Escardó, Paulo Oliva. "Selection functions, Bar recursion, and Backwards Induction". Math. Struct. in Comp. Science.

[3] Jeremy Avigad, Solomon Feferman (1999). "VI: Gödel's functional ("Dialectica") interpretation". In S. R. Buss. Handbook of Proof Theory.


Chapter 3

Corecursion

In computer science, corecursion is a type of operation that is dual to recursion. Whereas recursion works analytically, starting on data further from a base case and breaking it down into smaller data and repeating until one reaches a base case, corecursion works synthetically, starting from a base case and building it up, iteratively producing data further removed from a base case. Put simply, corecursive algorithms use the data that they themselves produce, bit by bit, as they become available, and needed, to produce further bits of data. A similar but distinct concept is generative recursion, which may lack a definite "direction" inherent in corecursion and recursion.

Where recursion allows programs to operate on arbitrarily complex data, so long as they can be reduced to simple data (base cases), corecursion allows programs to produce arbitrarily complex and potentially infinite data structures, such as streams, so long as it can be produced from simple data (base cases). Where recursion may not terminate, never reaching a base state, corecursion starts from a base state, and thus produces subsequent steps deterministically, though it may proceed indefinitely (and thus not terminate under strict evaluation), or it may consume more than it produces and thus become non-productive. Many functions that are traditionally analyzed as recursive can alternatively, and arguably more naturally, be interpreted as corecursive functions that are terminated at a given stage, for example recurrence relations such as the factorial.

Corecursion can produce both finite and infinite data structures as results, and may employ self-referential data structures. Corecursion is often used in conjunction with lazy evaluation, to produce only a finite subset of a potentially infinite structure (rather than trying to produce an entire infinite structure at once). Corecursion is a particularly important concept in functional programming, where corecursion and codata allow total languages to work with infinite data structures.

3.1 Examples

Corecursion can be understood by contrast with recursion, which is more familiar. While corecursion is primarily of interest in functional programming, it can be illustrated using imperative programming, which is done below using the generator facility in Python. In these examples local variables are used, and assigned values imperatively (destructively), though these are not necessary in corecursion in pure functional programming. In pure functional programming, rather than assigning to local variables, these computed values form an invariable sequence, and prior values are accessed by self-reference (later values in the sequence reference earlier values in the sequence to be computed). The assignments simply express this in the imperative paradigm and explicitly specify where the computations happen, which serves to clarify the exposition.

3.1.1 Factorial

A classic example of recursion is computing the factorial, which is defined recursively as 0! := 1 and n! := n × (n − 1)!.

To recursively compute its result on a given input, a recursive function calls (a copy of) itself with a different ("smaller" in some way) input and uses the result of this call to construct its result. The recursive call does the same, unless the base case has been reached. Thus a call stack develops in the process. For example, to compute fac(3), this recursively calls in turn fac(2), fac(1), fac(0) ("winding up" the stack), at which point recursion terminates with fac(0) = 1, and then the stack unwinds in reverse order and the results are calculated on the way back along the call stack to the initial call frame fac(3), where the final result is calculated as 3*2 =: 6 and finally returned. In this example a function returns a single value.

This stack unwinding can be explicated, defining the factorial corecursively, as an iterator, where one starts with the case of 1 =: 0!, then from this starting value constructs factorial values for increasing numbers 1, 2, 3... as in the above recursive definition with "time arrow" reversed, as it were, by reading it backwards as n! × (n+1) =: (n+1)!. The corecursive algorithm thus defined produces a stream of all factorials. This may be concretely implemented as a generator. Symbolically, noting that computing the next factorial value requires keeping track of both n and f (a previous factorial value), this can be represented as:

n, f = (0, 1) : (n + 1, f × (n + 1))

or in Haskell,

(\(n,f) -> (n+1, f*(n+1))) `iterate` (0,1)

meaning, "starting from n, f = 0, 1, on each step the next values are calculated as n + 1, f × (n + 1)". This is mathematically equivalent and almost identical to the recursive definition, but the +1 emphasizes that the factorial values are being built up, going forwards from the starting case, rather than being computed after first going backwards, down to the base case, with a −1 decrement. Note also that the direct output of the corecursive function does not simply contain the factorial n! values, but also includes for each value the auxiliary data of its index n in the sequence, so that any one specific result can be selected among them all, as and when needed.

Note the connection with denotational semantics, where the denotations of recursive programs are built up corecursively in this way.

In Python, a recursive factorial function can be defined as:[lower-alpha 1]

def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

This could then be called for example as factorial(5) to compute 5!.

A corresponding corecursive generator can be defined as:

def factorials():
    n, f = 0, 1
    while True:
        yield f
        n, f = n + 1, f * (n + 1)

This generates an infinite stream of factorials in order; a finite portion of it can be produced by:

def n_factorials(k):
    n, f = 0, 1
    while n <= k:
        yield f
        n, f = n + 1, f * (n + 1)

This could then be called to produce the factorials up to 5! via:

for f in n_factorials(5):
    print(f)

If we're only interested in a certain factorial, just the last value can be taken, or we can fuse the production and the access into one function:

def nth_factorial(k):
    n, f = 0, 1
    while n < k:
        n, f = n + 1, f * (n + 1)
    yield f

As can be readily seen here, this is practically equivalent (just by substituting return for the only yield there) to the accumulator argument technique for tail recursion, unwound into an explicit loop. Thus it can be said that the concept of corecursion is an explication of the embodiment of iterative computation processes by recursive definitions, where applicable.
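
For comparison, here is a sketch (not from the original text) of the accumulator argument technique mentioned above: the running index n and running product f are passed as parameters, so the recursive call is in tail position. The name fact_acc is illustrative.

def fact_acc(k, n=0, f=1):
    if n < k:
        # the same step as the loop body in nth_factorial above
        return fact_acc(k, n + 1, f * (n + 1))
    return f

print(fact_acc(5))   # 120

Unwinding the tail call into an explicit loop gives essentially the body of nth_factorial above (with return in place of yield).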

3.1.2 Fibonacci sequence

In the same way, the Fibonacci sequence can be represented as:

a, b = (0, 1) : (b, a + b)


Note that because the Fibonacci sequence is a recurrence relation of order 2, the corecursive relation must track two successive terms, with the (b, −) corresponding to shift forward by one step, and the (−, a + b) corresponding to computing the next term. This can then be implemented as follows (using parallel assignment):

def fibonacci_sequence():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

In Haskell,

map fst ( (\(a,b) -> (b,a+b)) `iterate` (0,1) )

3.1.3 Tree traversal

Tree traversal via a depth-first approach is a classic example of recursion. Dually, breadth-first traversal can very naturally be implemented via corecursion.

Without using recursion or corecursion, one may traverse a tree by starting at the root node, placing the child nodes in a data structure, then removing the nodes in the data structure in turn and iterating over its children.[lower-alpha 2] If the data structure is a stack (LIFO), this yields depth-first traversal, while if the data structure is a queue (FIFO), this yields breadth-first traversal.

Using recursion, a (post-order)[lower-alpha 3] depth-first traversal can be implemented by starting at the root node and recursively traversing each child subtree in turn (the subtree based at each child node) – the second child subtree does not start processing until the first child subtree is finished. Once a leaf node is reached or the children of a branch node have been exhausted, the node itself is visited (e.g., the value of the node itself is outputted). In this case, the call stack (of the recursive functions) acts as the stack that is iterated over.

Using corecursion, a breadth-first traversal can be implemented by starting at the root node, outputting its value,[lower-alpha 4] then breadth-first traversing the subtrees – i.e., passing on the whole list of subtrees to the next step (not a single subtree, as in the recursive approach) – at the next step outputting the value of all of their root nodes, then passing on their child subtrees, etc.[lower-alpha 5] In this case the generator function, indeed the output sequence itself, acts as the queue. As in the factorial example (above), where the auxiliary information of the index (which step one was at, n) was pushed forward, in addition to the actual output of n!, in this case the auxiliary information of the remaining subtrees is pushed forward, in addition to the actual output. Symbolically:

v, t = ([], FullTree) : (RootValues, ChildTrees)

meaning that at each step, one outputs the list of values of root nodes, then proceeds to the child subtrees. Generating just the node values from this sequence simply requires discarding the auxiliary child tree data, then flattening the list of lists (values are initially grouped by level (depth); flattening (ungrouping) yields a flat linear list).

These can be compared as follows. The recursive traversal handles a leaf node (at the bottom) as the base case (when there are no children, just output the value), and analyzes a tree into subtrees, traversing each in turn, eventually resulting in just leaf nodes – actual leaf nodes, and branch nodes whose children have already been dealt with (cut off below). By contrast, the corecursive traversal handles a root node (at the top) as the base case (given a node, first output the value), treats a tree as being synthesized of a root node and its children, then produces as auxiliary output a list of subtrees at each step, which are then the input for the next step – the child nodes of the original root are the root nodes at the next step, as their parents have already been dealt with (cut off above). Note also that in the recursive traversal there is a distinction between leaf nodes and branch nodes, while in the corecursive traversal there is no distinction, as each node is treated as the root node of the subtree it defines.

Notably, given an infinite tree,[lower-alpha 6] the corecursive breadth-first traversal will traverse all nodes, just as for a finite tree, while the recursive depth-first traversal will go down one branch and not traverse all nodes, and indeed if traversing post-order, as in this example (or in-order), it will visit no nodes at all, because it never reaches a leaf. This shows the usefulness of corecursion rather than recursion for dealing with infinite data structures.

In Python, this can be implemented as follows.[lower-alpha 7] The usual post-order depth-first traversal can be defined as:[lower-alpha 8]

def df(node):
    if node is not None:
        df(node.left)
        df(node.right)
        print(node.value)

This can then be called by df(t) to print the values of the nodes of the tree in post-order depth-first order.

The breadth-first corecursive generator can be defined as:[lower-alpha 9]


def bf(tree):
    tree_list = [tree]
    while tree_list:
        new_tree_list = []
        for tree in tree_list:
            if tree is not None:
                yield tree.value
                new_tree_list.append(tree.left)
                new_tree_list.append(tree.right)
        tree_list = new_tree_list

This can then be called to print the values of the nodes of the tree in breadth-first order:

for i in bf(t):
    print(i)

3.2 Definition

Initial data types can be defined as being the least fixpoint (up to isomorphism) of some type equation; the isomorphism is then given by an initial algebra. Dually, final (or terminal) data types can be defined as being the greatest fixpoint of a type equation; the isomorphism is then given by a final coalgebra.

If the domain of discourse is the category of sets and total functions, then final data types may contain infinite, non-wellfounded values, whereas initial types do not.[1][2] On the other hand, if the domain of discourse is the category of complete partial orders and continuous functions, which corresponds roughly to the Haskell programming language, then final types coincide with initial types, and the corresponding final coalgebra and initial algebra form an isomorphism.[3]

Corecursion is then a technique for recursively defining functions whose range (codomain) is a final data type, dual to the way that ordinary recursion recursively defines functions whose domain is an initial data type.[4]

The discussion below provides several examples in Haskell that distinguish corecursion. Roughly speaking, if one were to port these definitions to the category of sets, they would still be corecursive. This informal usage is consistent with existing textbooks about Haskell.[5] Also note that the examples used in this article predate the attempts to define corecursion and explain what it is.

3.3 Discussion

The rule for primitive corecursion on codata is the dual to that for primitive recursion on data. Instead of descending on the argument by pattern-matching on its constructors (that were called up before, somewhere, so we receive a ready-made datum and get at its constituent sub-parts, i.e. "fields"), we ascend on the result by filling-in its "destructors" (or "observers", that will be called afterwards, somewhere – so we're actually calling a constructor, creating another bit of the result to be observed later on). Thus corecursion creates (potentially infinite) codata, whereas ordinary recursion analyses (necessarily finite) data. Ordinary recursion might not be applicable to the codata because it might not terminate. Conversely, corecursion is not strictly necessary if the result type is data, because data must be finite.

Here is an example in Haskell. The following definition produces the list of Fibonacci numbers in linear time:

fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

This infinite list depends on lazy evaluation; elements are computed on an as-needed basis, and only finite prefixes are ever explicitly represented in memory. This feature allows algorithms on parts of codata to terminate; such techniques are an important part of Haskell programming.

This can be done in Python as well:[6]

from itertools import tee, chain, islice, imap

def add(x, y):
    return x + y

def fibonacci():
    def deferred_output():
        for i in output:
            yield i
    result, c1, c2 = tee(deferred_output(), 3)
    paired = imap(add, c1, islice(c2, 1, None))
    output = chain([0, 1], paired)
    return result

for i in islice(fibonacci(), 20):
    print(i)

The definition of zipWith can be inlined, leading to this:

fibs = 0 : 1 : next fibs where next (a: t@(b:_)) = (a+b):next t

This example employs a self-referential data structure. Ordinary recursion makes use of self-referential functions, but does not accommodate self-referential data. However, this is not essential to the Fibonacci example. It can be rewritten as follows:


fibs = fibgen (0,1)
fibgen (x,y) = x : fibgen (y,x+y)

This employs only a self-referential function to construct the result. If it were used with a strict list constructor it would be an example of runaway recursion, but with a non-strict list constructor this guarded recursion gradually produces an indefinitely defined list.

Corecursion need not produce an infinite object; a corecursive queue[7] is a particularly good example of this phenomenon. The following definition produces a breadth-first traversal of a binary tree in linear time:

data Tree a b = Leaf a | Branch b (Tree a b) (Tree a b)

bftrav :: Tree a b -> [Tree a b]
bftrav tree = queue
  where
    queue = tree : gen 1 queue
    gen 0   p                  = []
    gen len (Leaf _ : p)       = gen (len-1) p
    gen len (Branch _ l r : p) = l : r : gen (len+1) p

This definition takes an initial tree and produces a list of subtrees. This list serves a dual purpose as both the queue and the result (gen len p produces its output len notches after its input back-pointer, p, along the queue). It is finite if and only if the initial tree is finite. The length of the queue must be explicitly tracked in order to ensure termination; this can safely be elided if this definition is applied only to infinite trees.

Another particularly good example gives a solution to the problem of breadth-first labeling.[8] The function label visits every node in a binary tree in a breadth-first fashion, and replaces each label with an integer, each subsequent integer bigger than the last by one. This solution employs a self-referential data structure, and the binary tree can be finite or infinite.

label :: Tree a b -> Tree Int Int
label t = t'
  where (t', ns) = label' t (1:ns)

label' :: Tree a b -> [Int] -> (Tree Int Int, [Int])
label' (Leaf _)       (n:ns) = (Leaf n, n+1 : ns)
label' (Branch _ l r) (n:ns) = (Branch n l' r', n+1 : ns'')
  where (l', ns')  = label' l ns
        (r', ns'') = label' r ns'

An apomorphism (such as an anamorphism, such as unfold) is a form of corecursion in the same way that a paramorphism (such as a catamorphism, such as fold) is a form of recursion.

The Coq proof assistant supports corecursion and coinduction using the CoFixpoint command.

3.4 History

Corecursion, referred to as circular programming, dates at least to (Bird 1984), who credits John Hughes and Philip Wadler; more general forms were developed in (Allison 1989). The original motivations included producing more efficient algorithms (allowing 1 pass over data in some cases, instead of requiring multiple passes) and implementing classical data structures, such as doubly linked lists and queues, in functional languages.

3.5 See also

• Bisimulation

• Coinduction

• Recursion

• Anamorphism

3.6 Notes

[1] Not validating input data.

[2] More elegantly, one can start by placing the root node itself in the structure and then iterating.

[3] Post-order is to make “leaf node is base case” explicit for exposition, but the same analysis works for pre-order or in-order.

[4] Breadth-first traversal, unlike depth-first, is unambiguous, and visits a node value before processing children.


[5] Technically, one may define a breadth-first traversal on an ordered, disconnected set of trees – first the root node of each tree, then the children of each tree in turn, then the grandchildren in turn, etc.

[6] Assume fixed branching factor (e.g., binary), or at least bounded, and balanced (infinite in every direction).

[7] First defining a tree class, say via:

class Tree:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

    def __str__(self):
        return str(self.value)

and initializing a tree, say via:

t = Tree(1, Tree(2, Tree(4), Tree(5)), Tree(3, Tree(6), Tree(7)))

In this example nodes are labeled in breadth-first order: 1 2 3 4 5 6 7

[8] Intuitively, the function iterates over subtrees (possibly empty), then once these are finished, all that is left is the node itself, whose value is then returned; this corresponds to treating a leaf node as basic.

[9] Here the argument (and loop variable) is considered as a whole, possibly infinite tree, represented by (identified with) its root node (tree = root node), rather than as a potential leaf node, hence the choice of variable name.

3.7 References

[1] Barwise and Moss 1996.

[2] Moss and Danner 1997.

[3] Smyth and Plotkin 1982.

[4] Gibbons and Hutton 2005.

[5] Doets and van Eijck 2004.

[6] Hettinger 2009.

[7] Allison 1989; Smith 2009.

[8] Jones and Gibbons 1992.

• Bird, Richard Simpson (1984). "Using circular programs to eliminate multiple traversals of data". Acta Informatica 21 (3): 239–250. doi:10.1007/BF00264249.

• Lloyd Allison (April 1989). "Circular Programs and Self-Referential Structures". Software Practice and Experience 19 (2): 99–109. doi:10.1002/spe.4380190202.

• Geraint Jones and Jeremy Gibbons (1992). Linear-time breadth-first tree algorithms: An exercise in the arithmetic of folds and zips (Technical report). Dept of Computer Science, University of Auckland.

• Jon Barwise and Lawrence S Moss (June 1996). Vicious Circles. Center for the Study of Language and Information. ISBN 978-1-57586-009-1.

• Lawrence S Moss and Norman Danner (1997). "On the Foundations of Corecursion". Logic Journal of the IGPL 5 (2): 231–257. doi:10.1093/jigpal/5.2.231.

• Kees Doets and Jan van Eijck (May 2004). The Haskell Road to Logic, Maths, and Programming. King's College Publications. ISBN 978-0-9543006-9-2.

• David Turner (2004-07-28). "Total Functional Programming". Journal of Universal Computer Science 10 (7): 751–768. doi:10.3217/jucs-010-07-0751.

• Jeremy Gibbons and Graham Hutton (April 2005). "Proof methods for corecursive programs". Fundamenta Informaticae Special Issue on Program Transformation 66 (4): 353–366.

• Leon P Smith (2009-07-29), "Lloyd Allison's Corecursive Queues: Why Continuations Matter", The Monad Reader (14): 37–68.

• Raymond Hettinger (2009-11-19). "Recipe 576961: Technique for cyclical iteration".

• M. B. Smyth and G. D. Plotkin (1982). "The Category-Theoretic Solution of Recursive Domain Equations". SIAM Journal on Computing 11 (4): 761–783. doi:10.1137/0211062.


Chapter 4

Course-of-values recursion

In computability theory, course-of-values recursion is a technique for defining number-theoretic functions by recursion. In a definition of a function f by course-of-values recursion, the value of f(n+1) is computed from the sequence ⟨f(1), f(2), . . . , f(n)⟩. The fact that such definitions can be converted into definitions using a simpler form of recursion is often used to prove that functions defined by course-of-values recursion are primitive recursive.

This article uses the convention that the natural numbers are the set {1,2,3,4,...}.

4.1 Definition and examples

The factorial function n! is recursively defined by the rules

0! = 1,
(n+1)! = (n+1)*(n!).

This recursion is a primitive recursion because it computes the next value (n+1)! of the function based on the value of n and the previous value n! of the function. On the other hand, the function Fib(n), which returns the nth Fibonacci number, is defined with the recursion equations

Fib(0) = 0,
Fib(1) = 1,
Fib(n+2) = Fib(n+1) + Fib(n).

In order to compute Fib(n+2), the last two values of the Fib function are required. Finally, consider the function g defined with the recursion equations

g(0) = 0,
g(n+1) = Σ_{i=0}^{n} g(i)^(n−i).

To compute g(n+1) using these equations, all the previous values of g must be computed; no fixed finite number of previous values is sufficient in general for the computation of g. The functions Fib and g are examples of functions defined by course-of-values recursion.

In general, a function f is defined by course-of-values recursion if there is a fixed primitive recursive function h such that for all n,

f(n) = h(n, ⟨f(0), f(1), . . . , f(n− 1)⟩)

where ⟨f(0), f(1), . . . , f(n− 1)⟩ is a Gödel number encoding the indicated sequence. In particular


f(0) = h(0, ⟨⟩),

provides the initial value of the recursion. The function h might test its first argument to provide explicit initial values, for instance for Fib one could use the function defined by

h(n, s) = n if n < 2, and
h(n, s) = s[n−2] + s[n−1] if n ≥ 2,

where s[i] denotes extraction of the element i from an encoded sequence s; this is easily seen to be a primitive recursive function (assuming an appropriate Gödel numbering is used).

4.2 Equivalence to primitive recursion

In order to convert a definition by course-of-values recursion into a primitive recursion, an auxiliary (helper) function is used. Suppose that one wants to have

f(n) = h(n, ⟨f(0), f(1), . . . , f(n− 1)⟩)

To define f using primitive recursion, first define the auxiliary course-of-values function that should satisfy

f̄(n) = ⟨f(0), f(1), . . . , f(n− 1)⟩.

Thus f̄(n) encodes the first n values of f. The function f̄ can be defined by primitive recursion because f̄(n+1) is obtained by appending to f̄(n) the new element h(n, f̄(n)):

f̄(0) = ⟨⟩

f̄(n+ 1) = append(n, f̄(n), h(n, f̄(n))),

where append(n, s, x) computes, whenever s encodes a sequence of length n, a new sequence t of length n+1 such that t[n] = x and t[i] = s[i] for all i < n (again this is a primitive recursive function, under the assumption of an appropriate Gödel numbering).

Given f̄, the original function f can be defined by f(n) = f̄(n+1)[n], which shows that it is also a primitive recursive function.
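
As an informal illustration of this construction (a sketch only: a Python list and a loop stand in for the Gödel-numbered sequence and formal primitive recursion, and the names f_bar and h follow the discussion above), the course-of-values definition of Fib can be computed by building up the sequence of previous values:

def h(n, s):
    # the helper given for Fib in section 4.1, with s a list of prior values
    return n if n < 2 else s[n - 2] + s[n - 1]

def f_bar(n):
    s = []                   # f_bar(0) = <>
    for i in range(n):
        s.append(h(i, s))    # f_bar(i+1) = append(i, f_bar(i), h(i, f_bar(i)))
    return s

def f(n):
    return f_bar(n + 1)[n]   # f(n) = f_bar(n+1)[n]

print([f(n) for n in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]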

4.3 Application to primitive recursive functions

In the context of primitive recursive functions, it is convenient to have a means to represent finite sequences of natural numbers as single natural numbers. One such method, Gödel's encoding, represents a sequence ⟨n1, n2, . . . , nk⟩ as

∏_{i=1}^{k} p_i^{n_i}

where p_i represents the ith prime. It can be shown that, with this representation, the ordinary operations on sequences are all primitive recursive. These operations include:

• Determining the length of a sequence,

• Extracting an element from a sequence given its index,


• Concatenating two sequences.

Using this representation of sequences, it can be seen that if h(m) is primitive recursive then the function

f(n) = h(⟨f(1), f(2), . . . , f(n− 1)⟩)

is also primitive recursive.

When the natural numbers are taken to begin with zero, the sequence ⟨n1, n2, . . . , nk⟩ is instead represented as

∏_{i=1}^{k} p_i^{n_i + 1}

which makes it possible to distinguish the codes for the sequences ⟨0⟩ and ⟨0, 0⟩ .
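
A small sketch of this prime-power encoding (using the zero-based convention just described; the helper names primes, encode and decode are illustrative, and the trial-division prime generator is only meant for small inputs):

def primes(k):
    # first k primes by trial division
    found = []
    candidate = 2
    while len(found) < k:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def encode(seq):
    code = 1
    for p, n in zip(primes(len(seq)), seq):
        code *= p ** (n + 1)   # exponent n_i + 1, so <0> and <0, 0> differ
    return code

def decode(code, k):
    # recover a sequence of known length k by counting prime factors
    seq = []
    for p in primes(k):
        exp = 0
        while code % p == 0:
            code //= p
            exp += 1
        seq.append(exp - 1)
    return seq

print(encode([0]), encode([0, 0]))    # 2 6 -- distinct codes
print(decode(encode([3, 1, 2]), 3))   # [3, 1, 2]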

4.4 References

• Hinman, P.G., 2006, Fundamentals of Mathematical Logic, A K Peters.

• Odifreddi, P.G., 1989, Classical Recursion Theory, North Holland; second edition, 1999.


Chapter 5

Recursion

For other uses, see Recursion (disambiguation).

Recursion is the process of repeating items in a self-similar way. For instance, when the surfaces of two mirrors are exactly parallel with each other, the nested images that occur are a form of infinite recursion. The term has a variety of meanings specific to a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, in which it refers to a method of defining functions in which the function being defined is applied within its own definition. Specifically, this defines an infinite number of instances (function values), using a finite expression that for some instances may refer to other instances, but in such a way that no loop or infinite chain of references can occur. The term is also used more generally to describe a process of repeating objects in a self-similar way.

5.1 Formal definitions

In mathematics and computer science, a class of objects or methods exhibit recursive behavior when they can be defined by two properties:

1. A simple base case (or cases)—a terminating scenario that does not use recursion to produce an answer

2. A set of rules that reduce all other cases toward the base case

For example, the following is a recursive definition of a person’s ancestors:

• One’s parents are one’s ancestors (base case).

• The ancestors of one’s ancestors are also one’s ancestors (recursion step).

The Fibonacci sequence is a classic example of recursion:

Fib(0) = 0 as base case 1,
Fib(1) = 1 as base case 2,
For all integers n > 1, Fib(n) := Fib(n − 1) + Fib(n − 2).
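
As a minimal sketch (in Python, not part of the original text), the two base cases and the recursive rule translate directly into a recursive function:

def fib(n):
    if n == 0:
        return 0   # base case 1
    if n == 1:
        return 1   # base case 2
    return fib(n - 1) + fib(n - 2)   # recursive rule for n > 1

print([fib(n) for n in range(8)])    # [0, 1, 1, 2, 3, 5, 8, 13]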

Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: 0 is a natural number, and each natural number has a successor, which is also a natural number. By this base case and recursive rule, one can generate the set of all natural numbers.

Recursively defined mathematical objects include functions, sets, and especially fractals.

There are various more tongue-in-cheek "definitions" of recursion; see recursive humor.


5.2 Informal definition

Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be 'recursive'.

To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules. The running of a procedure involves actually following the rules and performing the steps. An analogy: a procedure is like a written recipe; running a procedure is like actually preparing the meal.

Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure. For instance, a recipe might refer to cooking vegetables, which is another procedure that in turn requires heating water, and so forth. However, a recursive procedure is where (at least) one of its steps calls for a new instance of the very same procedure, like a sourdough recipe calling for some dough left over from the last time the same recipe was made. This of course immediately creates the possibility of an endless loop; recursion can only be properly used in a definition if the step in question is skipped in certain cases so that the procedure can complete, like a sourdough recipe that also tells you how to get some starter dough in case you've never made it before. Even if properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old (partially executed) invocation of the procedure; this requires some administration of how far various simultaneous instances of the procedures have progressed. For this reason recursive definitions are very rare in everyday situations.

An example could be the following procedure to find a way through a maze. Proceed forward until reaching either an exit or a branching point (a dead end is considered a branching point with 0 branches). If the point reached is an exit, terminate. Otherwise try each branch in turn, using the procedure recursively; if every trial fails by reaching only dead ends, return on the path that led to this branching point and report failure. Whether this actually defines a terminating procedure depends on the nature of the maze: it must not allow loops. In any case, executing the procedure requires carefully recording all currently explored branching points, and which of their branches have already been exhaustively tried.
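
A minimal sketch of the maze procedure just described, assuming a loop-free maze represented as a dictionary mapping each branching point to the points its branches lead to (the names maze, exits and find_exit are illustrative, not from the text):

def find_exit(maze, exits, point):
    """Return a path to an exit as a list of points, or None if every trial fails."""
    if point in exits:                      # the point reached is an exit: terminate
        return [point]
    for branch in maze.get(point, []):      # otherwise try each branch in turn
        path = find_exit(maze, exits, branch)
        if path is not None:
            return [point] + path
    return None                             # only dead ends: report failure

# a dead end is a branching point with 0 branches
maze = {"start": ["dead_end", "corridor"], "corridor": ["exit"], "dead_end": []}
print(find_exit(maze, {"exit"}, "start"))   # ['start', 'corridor', 'exit']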

5.3 In language

Linguist Noam Chomsky, among many others, has argued that the lack of an upper bound on the number of grammatical sentences in a language, and the lack of an upper bound on grammatical sentence length (beyond practical constraints such as the time available to utter one), can be explained as the consequence of recursion in natural language.[1][2] This can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: Dorothy thinks witches are dangerous, in which the sentence witches are dangerous occurs in the larger one. So a sentence can be defined recursively (very roughly) as something with a structure that includes a noun phrase, a verb, and optionally another sentence. This is really just a special case of the mathematical definition of recursion.

This provides a way of understanding the creativity of language—the unbounded number of grammatical sentences—because it immediately predicts that sentences can be of arbitrary length: Dorothy thinks that Toto suspects that Tin Man said that.... Of course, there are many structures apart from sentences that can be defined recursively, and therefore many ways in which a sentence can embed instances of one category inside another. Over the years, languages in general have proved amenable to this kind of analysis.

Recently, however, the generally accepted idea that recursion is an essential property of human language has been challenged by Daniel Everett on the basis of his claims about the Pirahã language. Andrew Nevins, David Pesetsky and Cilene Rodrigues are among many who have argued against this.[3] Literary self-reference can in any case be argued to be different in kind from mathematical or logical recursion.[4]

Recursion plays a crucial role not only in syntax, but also in natural language semantics. The word and, for example, can be construed as a function that can apply to sentence meanings to create new sentences, and likewise for noun phrase meanings, verb phrase meanings, and others. It can also apply to intransitive verbs, transitive verbs, or ditransitive verbs. In order to provide a single denotation for it that is suitably flexible, the word and is typically defined so that it can take any of these different types of meanings as arguments. This can be done by defining it for a simple case in which it combines sentences, and then defining the other cases recursively in terms of the simple one.[5]


5.3.1 Recursive humor

Recursion is sometimes used humorously in computer science, programming, philosophy, or mathematics textbooks, generally by giving a circular definition or self-reference, in which the putative recursive step does not get closer to a base case, but instead leads to an infinite regress. It is not unusual for such books to include a joke entry in their glossary along the lines of:

Recursion, see Recursion.[6]

A variation is found on page 269 in the index of some editions of Brian Kernighan and Dennis Ritchie's book The C Programming Language; the index entry recursively references itself ("recursion 86, 139, 141, 182, 202, 269"). The earliest version of this joke was in "Software Tools" by Kernighan and Plauger, and also appears in "The UNIX Programming Environment" by Kernighan and Pike. It did not appear in the first edition of The C Programming Language.

Another joke is that "To understand recursion, you must understand recursion."[6] In the English-language version of the Google web search engine, when a search for "recursion" is made, the site suggests "Did you mean: recursion." An alternative form is the following, from Andrew Plotkin: "If you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer to Douglas Hofstadter than you are; then ask him or her what recursion is."

Recursive acronyms can also be examples of recursive humor. PHP, for example, stands for "PHP Hypertext Preprocessor", WINE stands for "Wine Is Not an Emulator", and GNU stands for "GNU's Not Unix".

5.4 In mathematics

5.4.1 Recursively defined sets

Main article: Recursive definition

Example: the natural numbers

See also: Closure (mathematics)

The canonical example of a recursively defined set is given by the natural numbers:

0 is in N.
If n is in N, then n + 1 is in N.
The set of natural numbers is the smallest set satisfying the previous two properties.

Example: The set of true reachable propositions

Another interesting example is the set of all “true reachable” propositions in an axiomatic system.

• If a proposition is an axiom, it is a true reachable proposition.

• If a proposition can be obtained from true reachable propositions by means of inference rules, it is a true reachable proposition.

• The set of true reachable propositions is the smallest set of propositions satisfying these conditions.

This set is called 'true reachable propositions' because in non-constructive approaches to the foundations of mathematics, the set of true propositions may be larger than the set recursively constructed from the axioms and rules of inference. See also Gödel's incompleteness theorems.


5.4.2 Finite subdivision rules

Main article: Finite subdivision rule

Finite subdivision rules are a geometric form of recursion, which can be used to create fractal-like images. A subdivision rule starts with a collection of polygons labelled by finitely many labels, and then each polygon is subdivided into smaller labelled polygons in a way that depends only on the labels of the original polygon. This process can be iterated. The standard 'middle thirds' technique for creating the Cantor set is a subdivision rule, as is barycentric subdivision.
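
As a small sketch of the 'middle thirds' rule (an illustration, not from the article), each interval is replaced by its outer thirds, and the rule is applied recursively:

from fractions import Fraction

def cantor(intervals, depth):
    """Apply the middle-thirds subdivision rule `depth` times to a list of (a, b) intervals."""
    if depth == 0:
        return intervals
    next_intervals = []
    for a, b in intervals:
        third = (b - a) / 3
        next_intervals.append((a, a + third))   # keep the left third
        next_intervals.append((b - third, b))   # keep the right third
    return cantor(next_intervals, depth - 1)

print(cantor([(Fraction(0), Fraction(1))], 2))
# four intervals: [0, 1/9], [2/9, 1/3], [2/3, 7/9], [8/9, 1]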

5.4.3 Functional recursion

A function may be partly defined in terms of itself. A familiar example is the Fibonacci number sequence: F(n) = F(n − 1) + F(n − 2). For such a definition to be useful, it must lead to non-recursively defined values, in this case F(0) = 0 and F(1) = 1.

A famous recursive function is the Ackermann function, which—unlike the Fibonacci sequence—cannot easily be expressed without recursion.
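
For illustration, a sketch of one common two-argument form of the Ackermann function (the Ackermann–Péter variant); the nesting of recursive calls inside recursive calls is what makes it hard to express without recursion:

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))   # nested recursive call

print(ackermann(2, 3))   # 9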

5.4.4 Proofs involving recursive definitions

Applying the standard technique of proof by cases to recursively defined sets or functions, as in the preceding sections, yields structural induction, a powerful generalization of mathematical induction widely used to derive proofs in mathematical logic and computer science.

5.4.5 Recursive optimization

Dynamic programming is an approach to optimization that restates a multiperiod or multistep optimization problem in recursive form. The key result in dynamic programming is the Bellman equation, which writes the value of the optimization problem at an earlier time (or earlier step) in terms of its value at a later time (or later step).

5.5 In computer science

Main article: Recursion (computer science)

A common method of simplification is to divide a problem into subproblems of the same type. As a computer programming technique, this is called divide and conquer and is key to the design of many important algorithms. Divide and conquer serves as a top-down approach to problem solving, where problems are solved by solving smaller and smaller instances. A contrary approach is dynamic programming. This approach serves as a bottom-up approach, where problems are solved by solving larger and larger instances, until the desired size is reached.

A classic example of recursion is the definition of the factorial function, given here in C code:

unsigned int factorial(unsigned int n) {
    if (n == 0) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

The function calls itself recursively on a smaller version of the input (n - 1) and multiplies the result of the recursive call by n, until reaching the base case, analogously to the mathematical definition of factorial.

Recursion in computer programming is exemplified when a function is defined in terms of simpler, often smaller versions of itself. The solution to the problem is then devised by combining the solutions obtained from the simpler versions of the problem. One example application of recursion is in parsers for programming languages. The great advantage of recursion is that an infinite set of possible sentences, designs or other data can be defined, parsed or produced by a finite computer program.

Recurrence relations are equations to define one or more sequences recursively. Some specific kinds of recurrence relation can be “solved” to obtain a non-recursive definition.


Use of recursion in an algorithm has both advantages and disadvantages. The main advantage is usually simplicity. The main disadvantage is often that the algorithm may require large amounts of memory if the depth of the recursion is very large.

5.6 In art

The Russian Doll or Matryoshka Doll is a physical artistic example of the recursive concept.

5.7 The recursion theorem

In set theory, this is a theorem guaranteeing that recursively defined functions exist. Given a set X, an element a of X and a function f : X → X, the theorem states that there is a unique function F : N → X (where N denotes the set of natural numbers including zero) such that

F(0) = a

F(n + 1) = f(F(n))

for any natural number n.
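As a brief illustration (an added example, not part of the original text): taking X to be the set of natural numbers, a = 1 and f(x) = 2x, the theorem guarantees a unique function F with

F(0) = 1
F(n + 1) = 2 · F(n)

namely F(n) = 2^n.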

5.7.1 Proof of uniqueness

Take two functions F : N → X and G : N → X such that:

F(0) = a

G(0) = a

F(n + 1) = f(F(n))

G(n + 1) = f(G(n))

where a is an element of X. It can be proved by mathematical induction that F(n) = G(n) for all natural numbers n:

Base Case: F(0) = a = G(0), so the equality holds for n = 0.

Inductive Step: Suppose F(k) = G(k) for some k ∈ N. Then F(k + 1) = f(F(k)) = f(G(k)) = G(k + 1).

Hence F(k) = G(k) implies F(k+1) = G(k+1).

By induction, F(n) = G(n) for all n ∈ N.

5.7.2 Examples

Some common recurrence relations are:

• Golden Ratio: ϕ = 1 + (1/ϕ) = 1 + (1/(1 + (1/(1 + 1/...))))

• Factorial: n! = n(n− 1)! = n(n− 1) · · · 1

• Fibonacci numbers: f(n) = f(n− 1) + f(n− 2)


• Catalan numbers: C0 = 1 , Cn+1 = (4n+ 2)Cn/(n+ 2)

• Computing compound interest

• The Tower of Hanoi

• Ackermann function

5.8 See also

• Corecursion

• Course-of-values recursion

• Digital infinity

• Fixed point combinator

• Infinite loop

• Infinitism

• Iterated function

• Mise en abyme

• Reentrant (subroutine)

• Self-reference

• Strange loop

• Tail recursion

• Tupper’s self-referential formula

• Turtles all the way down

5.9 Bibliography

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

• Johnsonbaugh, Richard (2004). Discrete Mathematics. Prentice Hall. ISBN 0-13-117686-2.

• Hofstadter, Douglas (1999). Gödel, Escher, Bach: an Eternal Golden Braid. Basic Books. ISBN 0-465-02656-7.

• Shoenfield, Joseph R. (2000). Recursion Theory. A K Peters Ltd. ISBN 1-56881-149-7.

• Causey, Robert L. (2001). Logic, Sets, and Recursion. Jones & Bartlett. ISBN 0-7637-1695-2.

• Cori, Rene; Lascar, Daniel; Pelletier, Donald H. (2001). Recursion Theory, Gödel's Theorems, Set Theory, Model Theory. Oxford University Press. ISBN 0-19-850050-5.

• Barwise, Jon; Moss, Lawrence S. (1996). Vicious Circles. Stanford Univ Center for the Study of Language and Information. ISBN 0-19-850050-5. Offers a treatment of corecursion.

• Rosen, Kenneth H. (2002). Discrete Mathematics and Its Applications. McGraw-Hill College. ISBN 0-07-293033-0.

• Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). Introduction to Algorithms. MIT Press. ISBN 0-262-03293-7.


• Kernighan, B.; Ritchie, D. (1988). The C Programming Language. Prentice Hall. ISBN 0-13-110362-8.

• Stokey, Nancy; Robert Lucas; Edward Prescott (1989). Recursive Methods in Economic Dynamics. Harvard University Press. ISBN 0-674-75096-9.

• Hungerford (1980). Algebra. Springer. ISBN 978-0-387-90518-1. First chapter on set theory.

5.10 References

[1] Pinker, Steven (1994). The Language Instinct. William Morrow.

[2] Pinker, Steven; Jackendoff, Ray (2005). “The faculty of language: What's so special about it?”. Cognition 95 (2): 201–236. doi:10.1016/j.cognition.2004.08.004. PMID 15694646.

[3] Nevins, Andrew; Pesetsky, David; Rodrigues, Cilene (2009). “Evidence and argumentation: A reply to Everett (2009)” (PDF). Language 85 (3): 671–681. doi:10.1353/lan.0.0140.

[4] Drucker, Thomas (4 January 2008). Perspectives on the History of Mathematical Logic. Springer Science & Business Media. p. 110. ISBN 978-0-8176-4768-1.

[5] Barbara Partee and Mats Rooth. 1983. In Rainer Bäuerle et al., Meaning, Use, and Interpretation of Language. Reprintedin Paul Portner and Barbara Partee, eds. 2002. Formal Semantics: The Essential Readings. Blackwell.

[6] Hunter, David (2011). Essentials of Discrete Mathematics. Jones and Bartlett. p. 494.

5.11 External links

• Recursion - tutorial by Alan Gauld

• A Primer on Recursion - contains pointers to recursion in Formal Languages, Linguistics, Math and Computer Science

• Zip Files All The Way Down

• Nevins, Andrew and David Pesetsky and Cilene Rodrigues. Evidence and Argumentation: A Reply to Everett (2009). Language 85.3: 671–681 (2009)


A visual form of recursion known as the Droste effect. The woman in this image holds an object that contains a smaller image of her holding an identical object, which in turn contains a smaller image of herself holding an identical object, and so forth. Advertisement for Droste cocoa, c. 1900


Ouroboros, an ancient symbol depicting a serpent or dragon eating its own tail.


Recently refreshed sourdough, bubbling through fermentation: the recipe calls for some sourdough left over from the last time the same recipe was made.


The Sierpinski triangle—a confined recursion of triangles that form a fractal


Chapter 6

Recursion (computer science)

This article is about recursive approaches to solving problems. For recursion in computer science acronyms, see Recursive acronym#Computer-related examples.

Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem (as opposed to iteration).[1] The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.[2]

“The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program, even if this program contains no explicit repetitions.”[3]

Most computer programming languages support recursion by allowing a function to call itself within the program text. Some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. Computability theory proves that these recursive-only languages are Turing complete; they are as computationally powerful as Turing complete imperative languages, meaning they can solve the same kinds of problems as imperative languages even without iterative control structures such as “while” and “for”.

6.1 Recursive functions and algorithms

A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of solving sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.

A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itself). For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n(n − 1)!. Neither equation by itself constitutes a complete definition; the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is sometimes also called the “terminating case”.

The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, with each recursive call, the input problem must be simplified in such a way that eventually the base case must be reached. (Functions that are not intended to terminate under normal circumstances—for example, some system and server processes—are an exception to this.) Neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop.

For some functions (such as one that computes the series for e = 1/0! + 1/1! + 1/2! + 1/3! + ...) there is not an obvious base case implied by the input data; for these one may add a parameter (such as the number of terms to be added, in our series example) to provide a 'stopping criterion' that establishes the base case. Such an example is more naturally treated by co-recursion, where successive terms in the output are the partial sums; this can be converted to a recursion by using the indexing parameter to say “compute the nth term (nth partial sum)”.
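As a rough sketch of that idea in C (added here for illustration; the function names are hypothetical), the number of terms to add serves as the extra parameter that supplies the base case:

// Approximate e = 1/0! + 1/1! + 1/2! + ... using a fixed number of terms.
// 'terms' is the stopping criterion; 'k' is the index of the current term and 'term' its value 1/k!.
double e_sum(unsigned int terms, unsigned int k, double term) {
    if (terms == 0)
        return 0.0;                                    // base case established by the extra parameter
    return term + e_sum(terms - 1, k + 1, term / (k + 1));
}

double e_approx(unsigned int terms) {
    return e_sum(terms, 0, 1.0);                       // start from the term 1/0! = 1
}

For instance, e_approx(10) already agrees with e to roughly six decimal places.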


Tree created using the Logo programming language and relying heavily on recursion

6.2 Recursive data types

Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know: the programmer can specify this data with a self-referential definition. There are two types of self-referential definitions: inductive and coinductive definitions.

Further information: Algebraic data type


6.2.1 Inductively defined data

Main article: Recursive data type

An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively (here, using Haskell syntax):

data ListOfStrings = EmptyList | Cons String ListOfStrings

The code above specifies a list of strings to be either empty, or a structure that contains a string and a list of strings. The self-reference in the definition permits the construction of lists of any (finite) number of strings.

Another example of inductive definition is the natural numbers (or positive integers):

A natural number is either 1 or n+1, where n is a natural number.

Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages. Language designers often express grammars in a syntax such as Backus-Naur form; here is such a grammar, for a simple language of arithmetic expressions with multiplication and addition:

<expr> ::= <number> | (<expr> * <expr>) | (<expr> + <expr>)

This says that an expression is either a number, a product of two expressions, or a sum of two expressions. By recursively referring to expressions in the second and third lines, the grammar permits arbitrarily complex arithmetic expressions such as (5 * ((3 * 6) + 8)), with more than one product or sum operation in a single expression.
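The recursive shape of such a grammar carries over directly to a recursive data type and a recursive evaluator. The following sketch in C is added here for illustration (the names expr, NUMBER, PRODUCT and SUM are hypothetical, not from the article):

// An expression is a number, a product of two expressions, or a sum of two expressions,
// mirroring the three alternatives of the grammar above.
enum expr_kind { NUMBER, PRODUCT, SUM };

struct expr {
    enum expr_kind kind;
    int value;             // used when kind == NUMBER
    struct expr *left;     // sub-expressions, used when kind is PRODUCT or SUM
    struct expr *right;
};

// Evaluate an expression by recursing on its sub-expressions.
int eval(const struct expr *e) {
    switch (e->kind) {
    case NUMBER:  return e->value;
    case PRODUCT: return eval(e->left) * eval(e->right);
    default:      return eval(e->left) + eval(e->right);   // SUM
    }
}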

6.2.2 Coinductively defined data and corecursion

Main articles: Coinduction and Corecursion

A coinductive data definition is one that specifies the operations that may be performed on a piece of data; typically, self-referential coinductive definitions are used for data structures of infinite size.

A coinductive definition of infinite streams of strings, given informally, might look like this:

A stream of strings is an object s such that: head(s) is a string, and tail(s) is a stream of strings.

This is very similar to an inductive definition of lists of strings; the difference is that this definition specifies how to access the contents of the data structure—namely, via the accessor functions head and tail—and what those contents may be, whereas the inductive definition specifies how to create the structure and what it may be created from.

Corecursion is related to coinduction, and can be used to compute particular instances of (possibly) infinite objects. As a programming technique, it is used most often in the context of lazy programming languages, and can be preferable to recursion when the desired size or precision of a program's output is unknown. In such cases the program requires both a definition for an infinitely large (or infinitely precise) result, and a mechanism for taking a finite portion of that result. The problem of computing the first n prime numbers is one that can be solved with a corecursive program.

6.3 Types of recursion

6.3.1 Single recursion and multiple recursion

Recursion that only contains a single self-reference is known as single recursion, while recursion that contains multiple self-references is known as multiple recursion. Standard examples of single recursion include list traversal, such as in a linear search, or computing the factorial function, while standard examples of multiple recursion include tree traversal, such as in a depth-first search, or computing the Fibonacci sequence.

Single recursion is often much more efficient than multiple recursion, and can generally be replaced by an iterative computation, running in linear time and requiring constant space. Multiple recursion, by contrast, may require exponential time and space, and is more fundamentally recursive, not being able to be replaced by iteration without an explicit stack.

Multiple recursion can sometimes be converted to single recursion (and, if desired, thence to iteration). For example, while computing the Fibonacci sequence naively is multiple recursion, as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. This is more naturally framed as corecursion, building up from the initial values, tracking at each step two successive values – see corecursion: examples. A more sophisticated example is using a threaded binary tree, which allows iterative tree traversal, rather than multiple recursion.
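A sketch of that conversion in C (an added illustration; the helper name fib_pair is hypothetical): each call passes the two most recent values forward, so only one self-call is made per step:

// Single recursion: carry the pair (a, b) = (F(i), F(i+1)) forward for n more steps.
static unsigned int fib_pair(unsigned int n, unsigned int a, unsigned int b) {
    if (n == 0)
        return a;                       // base case: the first component is F of the requested index
    return fib_pair(n - 1, b, a + b);   // a single recursive call, advancing the pair one step
}

unsigned int fibonacci_single(unsigned int n) {
    return fib_pair(n, 0, 1);           // start from F(0) = 0 and F(1) = 1
}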

6.3.2 Indirect recursion

Main article: Mutual recursion

Most basic examples of recursion, and most of the examples presented here, demonstrate direct recursion, in which a function calls itself. Indirect recursion occurs when a function is called not by itself but by another function that it called (either directly or indirectly). For example, if f calls f, that is direct recursion, but if f calls g which calls f, then that is indirect recursion of f. Chains of three or more functions are possible; for example, function 1 calls function 2, function 2 calls function 3, and function 3 calls function 1 again.

Indirect recursion is also called mutual recursion, which is a more symmetric term, though this is simply a difference of emphasis, not a different notion. That is, if f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, it is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly a set of three or more functions that call each other can be called a set of mutually recursive functions.
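A standard textbook illustration of mutual recursion, sketched here in C (an added example, not from the article), is a pair of functions that decide the parity of a natural number by calling each other:

#include <stdbool.h>

bool is_odd(unsigned int n);       // forward declaration, needed because the two functions call each other

// n is even if it is zero, or if n - 1 is odd.
bool is_even(unsigned int n) {
    if (n == 0)
        return true;
    return is_odd(n - 1);
}

// n is odd if it is nonzero and n - 1 is even.
bool is_odd(unsigned int n) {
    if (n == 0)
        return false;
    return is_even(n - 1);
}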

6.3.3 Anonymous recursion

Main article: Anonymous recursion

Recursion is usually done by explicitly calling a function by name. However, recursion can also be done by implicitly calling a function based on the current context, which is particularly useful for anonymous functions, and is known as anonymous recursion.

6.3.4 Structural versus generative recursion

See also: Structural recursion

Some authors classify recursion as either “structural” or “generative”. The distinction is related to where a recursive procedure gets the data that it works on, and how it processes that data:

[Functions that consume structured data] typically decompose their arguments into their immediate structural components and then process those components. If one of the immediate components belongs to the same class of data as the input, the function is recursive. For that reason, we refer to these functions as (STRUCTURALLY) RECURSIVE FUNCTIONS.[4]

Thus, the defining characteristic of a structurally recursive function is that the argument to each recursive call is the content of a field of the original input. Structural recursion includes nearly all tree traversals, including XML processing, binary tree creation and search, etc. By considering the algebraic structure of the natural numbers (that is, a natural number is either zero or the successor of a natural number), functions such as factorial may also be regarded as structural recursion.


Generative recursion is the alternative:

Many well-known recursive algorithms generate an entirely new piece of data from the given data and recur on it. HtDP (How To Design Programs) refers to this kind as generative recursion. Examples of generative recursion include: gcd, quicksort, binary search, mergesort, Newton's method, fractals, and adaptive integration.[5]

This distinction is important in proving termination of a function.

• All structurally recursive functions on finite (inductively defined) data structures can easily be shown to terminate, via structural induction: intuitively, each recursive call receives a smaller piece of input data, until a base case is reached.

• Generatively recursive functions, in contrast, do not necessarily feed smaller input to their recursive calls, so proof of their termination is not necessarily as simple, and avoiding infinite loops requires greater care. These generatively recursive functions can often be interpreted as corecursive functions – each step generates the new data, such as successive approximation in Newton's method – and terminating this corecursion requires that the data eventually satisfy some condition, which is not necessarily guaranteed.

• In terms of loop variants, structural recursion is when there is an obvious loop variant, namely size or complexity, which starts off finite and decreases at each recursive step.

• By contrast, generative recursion is when there is not such an obvious loop variant, and termination depends on a function, such as “error of approximation”, that does not necessarily decrease to zero, and thus termination is not guaranteed without further analysis.

6.4 Recursive programs

6.4.1 Recursive procedures

Factorial

A classic example of a recursive procedure is the function used to calculate the factorial of a natural number:

fact(n) = 1                      if n = 0
fact(n) = n · fact(n − 1)        if n > 0

The function can also be written as a recurrence relation:

bn = n · bn−1

b0 = 1

Evaluating this recurrence relation step by step shows the computation that the recursive definition performs. The factorial function can also be described without recursion by making use of the typical looping constructs found in imperative programming languages; such an iterative version is equivalent to the following mathematical definition using an accumulator variable t:

fact(n) = factacc(n, 1)

factacc(n, t) = t                           if n = 0
factacc(n, t) = factacc(n − 1, n · t)       if n > 0

The definition above translates straightforwardly to functional programming languages such as Scheme; this is an example of iteration implemented recursively.
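Rendered in the C used elsewhere in this text (an added sketch; the article itself refers to Scheme), the accumulator definition becomes an auxiliary function whose recursive call is its last action:

// factacc(n, t) returns t * n!; the accumulator t carries the partial product.
static unsigned int factacc(unsigned int n, unsigned int t) {
    if (n == 0)
        return t;                    // base case: the accumulator already holds the result
    return factacc(n - 1, n * t);    // tail call: nothing remains to be done after it returns
}

unsigned int factorial_acc(unsigned int n) {
    return factacc(n, 1);
}

Whether this actually runs in constant stack space depends on the compiler performing tail-call elimination, which C compilers are not required to do.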


Greatest common divisor

The Euclidean algorithm, which computes the greatest common divisor of two integers, can be written recursively.

Function definition:

gcd(x, y) = x                               if y = 0
gcd(x, y) = gcd(y, remainder(x, y))         if y > 0

Recurrence relation for greatest common divisor, where x%y expresses the remainder of x/y :

gcd(x, y) = gcd(y, x % y) if y ≠ 0

gcd(x, 0) = x

The recursive program above is tail-recursive; it is equivalent to an iterative algorithm, and a language that eliminates tail calls would evaluate it as the same sequence of steps as the iterative version. An explicitly iterative version, suitable for a language that does not eliminate tail calls, maintains its state entirely in the variables x and y and uses a looping construct, so the program avoids making recursive calls and growing the call stack. The iterative algorithm requires a temporary variable, and even given knowledge of the Euclidean algorithm it is more difficult to understand the process by simple inspection, although the two algorithms are very similar in their steps. One such iterative version is sketched below.
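One possible rendering of that iterative version in C (a sketch added here; the article's own listing is not reproduced):

// Iterative Euclidean algorithm: the variables x and y hold the entire state,
// and the temporary variable replaces the second argument of the recursive call.
int gcd_iterative(int x, int y) {
    while (y != 0) {
        int temp = x % y;
        x = y;
        y = temp;
    }
    return x;
}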

Towers of Hanoi


Main article: Towers of Hanoi

The Towers of Hanoi is a mathematical puzzle whose solution illustrates recursion.[6][7] There are three pegs which can hold stacks of disks of different diameters. A larger disk may never be stacked on top of a smaller. Starting with n disks on one peg, they must be moved to another peg one at a time. What is the smallest number of steps to move the stack?

Function definition:

hanoi(n) = 1                           if n = 1
hanoi(n) = 2 · hanoi(n − 1) + 1        if n > 1


Recurrence relation for hanoi:

hn = 2hn−1 + 1

h1 = 1

Although not all recursive functions have an explicit solution, the Tower of Hanoi sequence can be reduced to an explicit formula.[8] An example implementation is sketched below.
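One possible implementation in C (a sketch added here; the peg labels and function name are illustrative assumptions) both prints the moves and returns their count, matching the recurrence above:

#include <stdio.h>

// Move n disks from peg 'from' to peg 'to', using peg 'via' as scratch space.
// Returns the number of moves made, which satisfies hanoi(n) = 2 * hanoi(n - 1) + 1.
unsigned int hanoi(unsigned int n, char from, char to, char via) {
    if (n == 1) {
        printf("move disk 1 from %c to %c\n", from, to);
        return 1;                                            // base case: a single move
    }
    unsigned int moves = hanoi(n - 1, from, via, to);        // clear the n - 1 smaller disks out of the way
    printf("move disk %u from %c to %c\n", n, from, to);     // move the largest disk
    moves += 1 + hanoi(n - 1, via, to, from);                // put the smaller disks back on top
    return moves;
}

For example, hanoi(3, 'A', 'C', 'B') prints seven moves and returns 7, in agreement with the explicit formula 2^n − 1.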

Binary search

The binary search algorithm is a method of searching a sorted array for a single element by cutting the array in half with each recursive pass. The trick is to pick a midpoint near the center of the array, compare the data at that point with the data being searched and then respond to one of three possible conditions: the data is found at the midpoint, the data at the midpoint is greater than the data being searched for, or the data at the midpoint is less than the data being searched for.

Recursion is used in this algorithm because with each pass a new array is created by cutting the old one in half. The binary search procedure is then called recursively, this time on the new (and smaller) array. Typically the array's size is adjusted by manipulating a beginning and ending index. The algorithm exhibits a logarithmic order of growth because it essentially divides the problem domain in half with each pass.

Example implementation of binary search in C:

/* Call binary_search with proper initial conditions.
   INPUT: data is an array of integers SORTED in ASCENDING order,
          toFind is the integer to search for, count is the total number of elements in the array
   OUTPUT: result of binary_search */
int search(int *data, int toFind, int count) {
    // Start = 0 (beginning index)
    // End = count - 1 (top index)
    return binary_search(data, toFind, 0, count - 1);
}

/* Binary Search Algorithm.
   INPUT: data is an array of integers SORTED in ASCENDING order,
          toFind is the integer to search for, start is the minimum array index, end is the maximum array index
   OUTPUT: position of the integer toFind within array data, -1 if not found */
int binary_search(int *data, int toFind, int start, int end) {
    // Get the midpoint.
    int mid = start + (end - start) / 2;   // Integer division
    // Stop condition.
    if (start > end)
        return -1;
    else if (data[mid] == toFind)          // Found?
        return mid;
    else if (data[mid] > toFind)           // Data is greater than toFind, search lower half
        return binary_search(data, toFind, start, mid - 1);
    else                                   // Data is less than toFind, search upper half
        return binary_search(data, toFind, mid + 1, end);
}

6.4.2 Recursive data structures (structural recursion)

Main article: Recursive data type

An important application of recursion in computer science is in defining dynamic data structures such as lists and trees. Recursive data structures can dynamically grow to a theoretically infinite size in response to runtime requirements; in contrast, the size of a static array must be set at compile time.

“Recursive algorithms are particularly appropriate when the underlying problem or the data to be treated are defined in recursive terms.”[9]

The examples in this section illustrate what is known as “structural recursion”. This term refers to the fact that the recursive procedures are acting on data that is defined recursively.

As long as a programmer derives the template from a data definition, functions employ structural recursion. That is, the recursions in a function's body consume some immediate piece of a given compound value.[5]


Linked lists

Main article: Linked list

Below is a C definition of a linked list node structure. Notice especially how the node is defined in terms of itself. The “next” element of struct node is a pointer to another struct node, effectively creating a list type.

struct node {
    int data;            // some integer data
    struct node *next;   // pointer to another struct node
};

Because the struct node data structure is defined recursively, procedures that operate on it can be implemented naturally as recursive procedures. The list_print procedure defined below walks down the list until the list is empty (i.e., the list pointer has a value of NULL). For each node it prints the data element (an integer). In the C implementation, the list remains unchanged by the list_print procedure.

void list_print(struct node *list) {
    if (list != NULL) {                // base case
        printf("%d ", list->data);     // print integer data followed by a space
        list_print(list->next);        // recursive call on the next node
    }
}

Binary trees

Main article: Binary tree

Below is a simple definition for a binary tree node. Like the node for linked lists, it is defined in terms of itself, recursively. There are two self-referential pointers: left (pointing to the left sub-tree) and right (pointing to the right sub-tree).

struct node {
    int data;             // some integer data
    struct node *left;    // pointer to the left subtree
    struct node *right;   // pointer to the right subtree
};

Operations on the tree can be implemented using recursion. Note that because there are two self-referencing pointers (left and right), tree operations may require two recursive calls:

// Test if tree_node contains i; return 1 if so, 0 if not.
int tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return 0;      // base case
    else if (tree_node->data == i)
        return 1;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

At most two recursive calls will be made for any given call to tree_contains as defined above.

// Inorder traversal:
void tree_print(struct node *tree_node) {
    if (tree_node != NULL) {                // base case
        tree_print(tree_node->left);        // go left
        printf("%d ", tree_node->data);     // print the integer followed by a space
        tree_print(tree_node->right);       // go right
    }
}

The above example illustrates an in-order traversal of the binary tree. A binary search tree is a special case of the binary tree where the data elements of each node are in order.

Filesystem traversal

Since the number of files in a filesystem may vary, recursion is the only practical way to traverse and thus enumerate its contents. Traversing a filesystem is very similar to tree traversal, therefore the concepts behind tree traversal are applicable to traversing a filesystem. More specifically, the code below would be an example of a preorder traversal of a filesystem.

import java.io.*;

public class FileSystem {

    public static void main(String[] args) {
        traverse();
    }

    /**
     * Obtains the filesystem roots
     * Proceeds with the recursive filesystem traversal
     */
    private static void traverse() {
        File[] fs = File.listRoots();
        for (int i = 0; i < fs.length; i++) {
            if (fs[i].isDirectory() && fs[i].canRead()) {
                rtraverse(fs[i]);
            }
        }
    }

    /**
     * Recursively traverse a given directory
     *
     * @param fd indicates the starting point of traversal
     */
    private static void rtraverse(File fd) {
        File[] fss = fd.listFiles();
        for (int i = 0; i < fss.length; i++) {
            System.out.println(fss[i]);
            if (fss[i].isDirectory() && fss[i].canRead()) {
                rtraverse(fss[i]);
            }
        }
    }
}


This code blends the lines, at least somewhat, between recursion and iteration. It is, essentially, a recursive implementation, which is the best way to traverse a filesystem. It is also an example of direct and indirect recursion. The method “rtraverse” is purely a direct example; the method “traverse” is the indirect, which calls “rtraverse”. This example needs no “base case” scenario because there will always be some fixed number of files or directories in a given filesystem.

6.5 Implementation issues

In actual implementation, rather than a pure recursive function (single check for base case, otherwise recursive step), a number of modifications may be made, for purposes of clarity or efficiency. These include:

• Wrapper function (at top)

• Short-circuiting the base case, aka “Arm’s-length recursion” (at bottom)

• Hybrid algorithm (at bottom) – switching to a different algorithm once data is small enough

On the basis of elegance, wrapper functions are generally approved, while short-circuiting the base case is frowned upon, particularly in academia. Hybrid algorithms are often used for efficiency, to reduce the overhead of recursion in small cases, and arm's-length recursion is a special case of this.

6.5.1 Wrapper function

A wrapper function is a function that is directly called but does not recurse itself, instead calling a separate auxiliary function which actually does the recursion.

Wrapper functions can be used to validate parameters (so the recursive function can skip these), perform initialization (allocate memory, initialize variables), particularly for auxiliary variables such as “level of recursion” or partial computations for memoization, and handle exceptions and errors. In languages that support nested functions, the auxiliary function can be nested inside the wrapper function and use a shared scope. In the absence of nested functions, auxiliary functions are instead a separate function, if possible private (as they are not called directly), and information is shared with the wrapper function by using pass-by-reference.

6.5.2 Short-circuiting the base case

Short-circuiting the base case, also known as arm's-length recursion, consists of checking the base case before making a recursive call – i.e., checking if the next call will be the base case, instead of calling and then checking for the base case. Short-circuiting is particularly done for efficiency reasons, to avoid the overhead of a function call that immediately returns. Note that since the base case has already been checked for (immediately before the recursive step), it does not need to be checked for separately, but one does need to use a wrapper function for the case when the overall recursion starts with the base case itself. For example, in the factorial function, properly the base case is 0! = 1, while immediately returning 1 for 1! is a short-circuit, and may miss 0; this can be mitigated by a wrapper function.

Short-circuiting is primarily a concern when many base cases are encountered, such as Null pointers in a tree, which can be linear in the number of function calls, hence significant savings for O(n) algorithms; this is illustrated below for a depth-first search. Short-circuiting on a tree corresponds to considering a leaf (non-empty node with no children) as the base case, rather than considering an empty node as the base case. If there is only a single base case, such as in computing the factorial, short-circuiting provides only O(1) savings.

Conceptually, short-circuiting can be considered to either have the same base case and recursive step, only checking the base case before the recursion, or it can be considered to have a different base case (one step removed from the standard base case) and a more complex recursive step, namely “check valid then recurse”, as in considering leaf nodes rather than Null nodes as base cases in a tree. Because short-circuiting has a more complicated flow, compared with the clear separation of base case and recursive step in standard recursion, it is often considered poor style, particularly in academia.


Depth-first search

A basic example of short-circuiting is given in depth-first search (DFS) of a binary tree; see the binary trees section for the standard recursive discussion.

The standard recursive algorithm for a DFS is:

• base case: If current node is Null, return false

• recursive step: otherwise, check value of current node, return true if match, otherwise recurse on children

In short-circuiting, this is instead:

• check value of current node, return true if match,

• otherwise, on children, if not Null, then recurse.

In terms of the standard steps, this moves the base case check before the recursive step. Alternatively, these can be considered a different form of base case and recursive step, respectively. Note that this requires a wrapper function to handle the case when the tree itself is empty (root node is Null).

In the case of a perfect binary tree of height h, there are 2^(h+1) − 1 nodes and 2^(h+1) Null pointers as children (2 for each of the 2^h leaves), so short-circuiting cuts the number of function calls in half in the worst case.

In C, the standard recursive algorithm may be implemented as:

bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;       // base case
    else if (tree_node->data == i)
        return true;
    else
        return tree_contains(tree_node->left, i) || tree_contains(tree_node->right, i);
}

The short-circuited algorithm may be implemented as:

// Wrapper function to handle empty tree
bool tree_contains(struct node *tree_node, int i) {
    if (tree_node == NULL)
        return false;                          // empty tree
    else
        return tree_contains_do(tree_node, i); // call auxiliary function
}

// Assumes tree_node != NULL
bool tree_contains_do(struct node *tree_node, int i) {
    if (tree_node->data == i)
        return true;                           // found
    else                                       // recurse
        return (tree_node->left  && tree_contains_do(tree_node->left, i)) ||
               (tree_node->right && tree_contains_do(tree_node->right, i));
}

Note the use of short-circuit evaluation of the Boolean && (AND) operators, so that the recursive call is only made if the node is valid (non-Null). Note that while the first term in the AND is a pointer to a node, the second term is a bool, so the overall expression evaluates to a bool. This is a common idiom in recursive short-circuiting. This is in addition to the short-circuit evaluation of the Boolean || (OR) operator, to only check the right child if the left child fails. In fact, the entire control flow of these functions can be replaced with a single Boolean expression in a return statement, but legibility suffers at no benefit to efficiency.

6.5.3 Hybrid algorithm

Recursive algorithms are often inefficient for small data, due to the overhead of repeated function calls and returns. For this reason efficient implementations of recursive algorithms often start with the recursive algorithm, but then switch to a different algorithm when the input becomes small. An important example is merge sort, which is often implemented by switching to the non-recursive insertion sort when the data is sufficiently small, as in the tiled merge sort. Hybrid recursive algorithms can often be further refined, as in Timsort, derived from a hybrid merge sort/insertion sort.
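A compressed sketch of such a hybrid in C is given below for illustration (the cutoff of 32 elements is an arbitrary assumption, and production implementations such as Timsort are considerably more elaborate): merge sort recurses as usual, but hands small subarrays to insertion sort.

#include <string.h>     // memcpy

#define CUTOFF 32       // below this size, fall back to the non-recursive algorithm

static void insertion_sort(int *a, int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i], j = i - 1;
        while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

// Hybrid merge sort; 'tmp' is a scratch buffer with room for n ints, supplied by the caller.
static void hybrid_sort(int *a, int n, int *tmp) {
    if (n <= CUTOFF) {
        insertion_sort(a, n);            // small case: switch algorithms instead of recursing further
        return;
    }
    int mid = n / 2;
    hybrid_sort(a, mid, tmp);            // sort the two halves recursively
    hybrid_sort(a + mid, n - mid, tmp);
    int i = 0, j = mid, k = 0;           // merge the sorted halves into tmp, then copy back
    while (i < mid && j < n)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < n)   tmp[k++] = a[j++];
    memcpy(a, tmp, n * sizeof(int));
}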

6.6 Recursion versus iteration

Recursion and iteration are equally expressive: recursion can be replaced by iteration with an explicit stack, while iteration can be replaced with tail recursion. Which approach is preferable depends on the problem under consideration and the language used. In imperative programming, iteration is preferred, particularly for simple recursion, as it avoids the overhead of function calls and call stack management, but recursion is generally used for multiple recursion.


By contrast, in functional languages recursion is preferred, with tail recursion optimization leading to little overhead, and sometimes explicit iteration is not available.

Compare the templates to compute xn defined by xn = f(n, xn−1) from xbase: for an imperative language the overhead is to define the function, for a functional language the overhead is to define the accumulator variable x.

For example, the factorial function may be implemented iteratively in C by assigning to a loop index variable and an accumulator variable, rather than passing arguments and returning values by recursion:

unsigned int factorial(unsigned int n) {
    unsigned int product = 1;   // empty product is 1
    while (n) {
        product *= n;
        --n;
    }
    return product;
}

6.6.1 Expressive power

Most programming languages in use today allow the direct specification of recursive functions and procedures. When such a function is called, the program's runtime environment keeps track of the various instances of the function (often using a call stack, although other methods may be used). Every recursive function can be transformed into an iterative function by replacing recursive calls with iterative control constructs and simulating the call stack with a stack explicitly managed by the program.[10][11]

Conversely, all iterative functions and procedures that can be evaluated by a computer (see Turing completeness) can be expressed in terms of recursive functions; iterative control constructs such as while loops and do loops are routinely rewritten in recursive form in functional languages.[12][13] However, in practice this rewriting depends on tail call elimination, which is not a feature of all languages. C, Java, and Python are notable mainstream languages in which all function calls, including tail calls, cause stack allocation that would not occur with the use of looping constructs; in these languages, a working iterative program rewritten in recursive form may overflow the call stack.

6.6.2 Performance issues

In languages (such as C and Java) that favor iterative looping constructs, there is usually significant time and space cost associated with recursive programs, due to the overhead required to manage the stack and the relative slowness of function calls; in functional languages, a function call (particularly a tail call) is typically a very fast operation, and the difference is usually less noticeable.

As a concrete example, the difference in performance between recursive and iterative implementations of the “factorial” example above depends highly on the compiler used. In languages where looping constructs are preferred, the iterative version may be as much as several orders of magnitude faster than the recursive one. In functional languages, the overall time difference of the two implementations may be negligible; in fact, the cost of multiplying the larger numbers first rather than the smaller numbers (which the iterative version given here happens to do) may overwhelm any time saved by choosing iteration.

6.6.3 Stack space

In some programming languages, the stack space available to a thread is much less than the space available in the heap, and recursive algorithms tend to require more stack space than iterative algorithms. Consequently, these languages sometimes place a limit on the depth of recursion to avoid stack overflows; Python is one such language.[14] Note the caveat below regarding the special case of tail recursion.

6.6.4 Multiply recursive problems

Multiply recursive problems are inherently recursive, because of the prior state they need to track. One example is tree traversal as in depth-first search; contrast with list traversal and linear search in a list, which is singly recursive and thus naturally iterative. Other examples include divide-and-conquer algorithms such as Quicksort, and functions such as the Ackermann function. All of these algorithms can be implemented iteratively with the help of an explicit stack, but the programmer effort involved in managing the stack, and the complexity of the resulting program, arguably outweigh any advantages of the iterative solution.


6.7 Tail-recursive functions

Tail-recursive functions are functions in which all recursive calls are tail calls and hence do not build up any deferred operations. For example, the gcd function (sketched below) is tail-recursive. In contrast, the factorial function (also below) is not tail-recursive; because its recursive call is not in tail position, it builds up deferred multiplication operations that must be performed after the final recursive call completes. With a compiler or interpreter that treats tail-recursive calls as jumps rather than function calls, a tail-recursive function such as gcd will execute using constant space. Thus the program is essentially iterative, equivalent to using imperative language control structures like the “for” and “while” loops.

The significance of tail recursion is that when making a tail-recursive call (or any tail call), the caller's return position need not be saved on the call stack; when the recursive call returns, it will branch directly to the previously saved return position. Therefore, in languages that recognize this property of tail calls, tail recursion saves both space and time.
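For comparison, here are the two functions in C (a sketch added to accompany the text; the original listings are not reproduced). In gcd the recursive call is the function's final action, so nothing is deferred; in factorial the multiplication by n can only happen after the recursive call returns:

// Tail-recursive: the call to gcd is the last thing the function does.
int gcd(int x, int y) {
    if (y == 0)
        return x;
    return gcd(y, x % y);
}

// Not tail-recursive: a deferred multiplication remains pending across each recursive call.
unsigned int factorial(unsigned int n) {
    if (n == 0)
        return 1;
    return n * factorial(n - 1);
}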

6.8 Order of execution

In the simple case of a function calling itself only once, instructions placed before the recursive call are executed once per recursion before any of the instructions placed after the recursive call. The latter are executed repeatedly after the maximum recursion has been reached. Consider this example:

6.8.1 Function 1

void recursiveFunction(int num) {
    printf("%d\n", num);
    if (num < 4)
        recursiveFunction(num + 1);
}

6.8.2 Function 2 with swapped lines

void recursiveFunction(int num) {
    if (num < 4)
        recursiveFunction(num + 1);
    printf("%d\n", num);
}
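For example (an illustrative choice of starting value, not specified in the text), calling each version with an initial argument of 0 makes the difference in ordering visible:

int main(void) {
    recursiveFunction(0);   // Function 1 prints 0 1 2 3 4 (printing before each recursive call);
                            // Function 2 prints 4 3 2 1 0 (printing after the deepest call returns)
    return 0;
}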

6.9 Time-efficiency of recursive algorithms

The time efficiency of recursive algorithms can be expressed in a recurrence relation of Big O notation. They can (usually) then be simplified into a single Big-Oh term.


6.9.1 Shortcut rule

Main article: Master theorem

If the time-complexity of the function is in the form

T(n) = a · T(n/b) + O(n^k)

Then the Big-Oh of the time-complexity is thus:

• If a > b^k, then the time-complexity is O(n^(log_b a))

• If a = b^k, then the time-complexity is O(n^k · log n)

• If a < b^k, then the time-complexity is O(n^k)

where a represents the number of recursive calls at each level of recursion, b represents by what factor smaller the input is for the next level of recursion (i.e. the number of pieces you divide the problem into), and n^k represents the work the function does independent of any recursion (e.g. partitioning, recombining) at each level of recursion.
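For example (a worked instance added here, not part of the original text), merge sort makes two recursive calls on halves of the input and does linear work to merge them, so its recurrence is

T(n) = 2 · T(n/2) + O(n)

Here a = 2, b = 2 and k = 1, so a = b^k; the second case applies and the time-complexity is O(n^k · log n) = O(n · log n).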

6.10 See also

• Ackermann function

• Corecursion

• Functional programming

• Hierarchical and recursive queries in SQL

• Kleene–Rosser paradox

• McCarthy 91 function

• Memoization

• μ-recursive function

• Open recursion

• Primitive recursive function

• Recursion

• Sierpiński curve

• Takeuchi function

6.11 Notes and references

[1] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1: Recurrent Problems.

[2] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). p. 427.

[3] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 126.

[4] Felleisen, Matthias; Robert Bruce Findler; Matthew Flatt; Shriram Krishnamurthi (2001). How to Design Programs: An Introduction to Computing and Programming. Cambridge, Mass.: MIT Press. Part V, “Generative Recursion”.

[5] Felleisen, Matthias (2002). “Developing Interactive Web Programs”. In Jeuring, Johan. Advanced Functional Programming: 4th International School. Oxford, UK: Springer. p. 108.

[6] Graham, Ronald; Donald Knuth; Oren Patashnik (1990). Concrete Mathematics. Chapter 1, Section 1.1: The Tower of Hanoi.


[7] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 427–430: The Tower of Hanoi.

[8] Epp, Susanna (1995). Discrete Mathematics with Applications (2nd ed.). pp. 447–448: An Explicit Formula for the Tower of Hanoi Sequence.

[9] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall. p. 127.

[10] Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 79, ISBN 9781430232384.

[11] Drozdek, Adam (2012), Data Structures and Algorithms in C++ (4th ed.), Cengage Learning, p. 197, ISBN 9781285415017.

[12] Shivers, Olin. “The Anatomy of a Loop - A story of scope and control” (PDF). Georgia Institute of Technology. Retrieved 2012-09-03.

[13] Lambda the Ultimate. “The Anatomy of a Loop”. Lambda the Ultimate. Retrieved 2012-09-03.

[14] “27.1. sys — System-specific parameters and functions — Python v2.7.3 documentation”. Docs.python.org. Retrieved 2012-09-03.

6.12 Further reading

• Dijkstra, Edsger W. (1960). “Recursive Programming”. Numerische Mathematik 2 (1): 312–318. doi:10.1007/BF01386232.

6.13 External links

• Harold Abelson and Gerald Sussman: “Structure and Interpretation Of Computer Programs”

• Jonathan Bartlett: “Mastering Recursive Programming”

• David S. Touretzky: “Common Lisp: A Gentle Introduction to Symbolic Computation”

• Matthias Felleisen: “How To Design Programs: An Introduction to Computing and Programming”

• Owen L. Astrachan: “Big-Oh for Recursive Functions: Recurrence Relations”
