More on Converting Numbers to
the Double-Base Number System
Valérie Berthé1, Laurent Imbert1,2, and Graham A. Jullien2
1 LIRMM, CNRS UMR 5506
161 rue Ada, 34392 Montpellier cedex 5, France
2 ATIPS, CISaC, University of Calgary
2500 University drive NW, Calgary, AB, T2N 1N4, Canada
Research Report LIRMM-04031
October 2004
Abstract
In this article, we present new results on the problem of converting a number x from
binary to the double-base number system (DBNS), i.e., representing it as a sum of mixed
powers of 2 and 3. A greedy algorithm was previously proposed, which requires lookup tables
in order to find the largest number of the form 2^a 3^b less than or equal to x. We address
this problem by using results from the theory of continued fractions and Diophantine
approximation. An approximation to the greedy algorithm is proposed for the conversion
problem, and experimental results demonstrate its efficiency. Although the material presented
in this article is mainly theoretical, the proposed algorithms could lead to very efficient
implementations for cryptographic or digital signal processing applications.
Keywords: Continued fractions, Diophantine approximation, Double-base number system
1 Introduction
The Double-Base number system (DBNS), introduced by V. Dimitrov and G. A. Jullien [1],
has advantages in many applications, such as cryptography [2] and digital signal processing [3].
Recently, in his Ph.D. dissertation [4], R. Muscedere proposed several hardware-based solutions
for the difficult operations in the Multi-Dimensional Logarithmic Number System (MDLNS),
which can be seen as a generalization of the DBNS. He addresses the problems of addition,
subtraction, and conversion from binary. Efficient methods have been proposed – using lookup
tables with special range-addressing schemes – for digital signal processing applications, where
the dynamic range of the numbers does not usually exceed 16 to 32 bits. However, such table-based
solutions become unrealistic to implement as the numbers grow in size, as in cryptographic
applications for example, and the techniques also seem quite difficult to generalize.
The main objective of this paper is to provide theoretical approaches to the problem of
converting a number from binary to DBNS. We tackle the problem using continued fractions,
Ostrowski's number system, and Diophantine approximation.
In the second part of the paper, we propose an approximate algorithm for the conversion
from binary to DBNS which has many advantages over the classical approach.
In the Double-Base number system, we represent integers in the form
x = Σ_i 2^{a_i} 3^{b_i},    (1)
where the a_i, b_i are non-negative, independent integers. Following B. M. M. de Weger's defi-
nition of s-integers [5] – an integer is called an s-integer if all of its prime divisors are among the
first s primes – we shall refer to numbers of the form 2^a 3^b as 2-integers in the rest of the paper.
This representation is clearly highly redundant. For every integer x, the representations with
the minimum number of digits, i.e., the minimum number of 2-integers (each less than or equal
to x), are called the canonic double-base number representations. For example, 127 has 783
different representations, among which 3 are canonic (with only three 2-integers):
127 = 2^2·3^3 + 2^1·3^2 + 2^0·3^0 = 2^2·3^3 + 2^4·3^0 + 2^0·3^1 = 2^5·3^1 + 2^0·3^3 + 2^2·3^0.
In the rest of the paper we shall refer to the size of a DBNS representation as its number of
digits, i.e., the number of terms in (1) required to represent a given integer.
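These counts are easy to check mechanically. A minimal Python sketch (exponent pairs transcribed from the example above) verifies that each canonic representation sums to 127, and that no representation with fewer than three 2-integers exists:

```python
# The three canonic representations of 127, as (a, b) pairs for terms 2^a * 3^b.
reps = [
    [(2, 3), (1, 2), (0, 0)],
    [(2, 3), (4, 0), (0, 1)],
    [(5, 1), (0, 3), (2, 0)],
]
for rep in reps:
    assert sum(2**a * 3**b for a, b in rep) == 127

# Every 2-integer up to 127: 127 is not one of them, and no two of them
# sum to 127, so three digits is indeed the minimum.
two_integers = {2**a * 3**b for a in range(7) for b in range(5)
                if 2**a * 3**b <= 127}
assert 127 not in two_integers
assert all(127 - s not in two_integers for s in two_integers)
```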
An elegant way to visualize DBNS numbers is to use a two-dimensional array with, for
example, columns representing powers of 2 and rows representing powers of 3. Each term in (1)
is represented by a colored cell. The three canonic representations of 127 are given in Figure 1.
Figure 1: The three canonic DBNS representations of 127
The problem of finding a canonic DBNS representation of a given integer is a difficult one.
In [6], V. Dimitrov and G. A. Jullien proposed a greedy algorithm which provides the
so-called near-canonic double-base number representation (NCDBNR). In this article, and for
reasons we shall explain further, we use the terms full-greedy, or F-greedy, to refer to Algorithm 1
presented below. It has been proved that F-greedy terminates in O(log x / log log x) iterations.
Algorithm 1 F-greedy
Input: A positive integer x
Output: R, the sequence of exponents in the F-greedy DBNS representation of x
1: R ← ∅
2: while x > 0 do
3:   Find s = 2^a 3^b, the largest 2-integer less than or equal to x
4:   R ← R ∪ {(a, b)}
5:   x ← x − s
6: end while
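As a baseline for the rest of the paper, Algorithm 1 can be sketched in Python, with the inner step implemented by the naive exhaustive search that the following sections aim to replace (the function names are ours):

```python
def largest_2integer(x):
    """Exhaustive search for the largest s = 2^a * 3^b <= x; returns (s, a, b)."""
    best = (1, 0, 0)
    a, pow2 = 0, 1
    while pow2 <= x:
        s, b = pow2, 0
        while s <= x:
            if s > best[0]:
                best = (s, a, b)
            s, b = 3 * s, b + 1
        a, pow2 = a + 1, 2 * pow2
    return best

def f_greedy(x):
    """Algorithm 1: repeatedly subtract the largest 2-integer <= x."""
    R = []
    while x > 0:
        s, a, b = largest_2integer(x)
        R.append((a, b))
        x -= s
    return R
```

For x = 127 this returns [(2, 3), (1, 2), (0, 0)], i.e., the first canonic representation given above.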
In this paper we investigate the problem of finding the largest 2-integer less than or equal
to x, required at each iteration of F-greedy. For small size numbers, the use of lookup tables
seems to be the best approach. However, for larger integers, it becomes unrealistic and other
solutions are required. Although it is not a difficult computational problem, we shall see that
our solution is much more efficient than performing an exhaustive search.
Let us define the problem more precisely. We try to find two non-negative integers a, b such
that 2^a 3^b ≤ x and, among the solutions to this problem, 2^a 3^b is the largest possible value, or
equivalently,

2^a 3^b = max { 2^c 3^d : (c, d) ∈ N², 2^c 3^d ≤ x }.    (2)
If we let a, b ∈ N be such that 2^a 3^b ≤ x, our problem can be reformulated as finding
non-negative integers a and b such that

a log 2 + b log 3 ≤ log x,    (3)
and such that no other integers c, d ≥ 0 give a better left approximation to log x. Let us define
α = log_3 2 and β = {log_3 x} = log_3 x − ⌊log_3 x⌋ (β is the fractional part of log_3 x). Then we
try to find the best left approximation to log_3 x with non-negative integers. If a, b are solutions
to this problem, then, for all (c, d) ∈ N² with (c, d) ≠ (a, b) and cα + d ≤ log_3 x, we have

cα + d < aα + b ≤ β + ⌊log_3 x⌋ = log_3 x.    (4)
A graphical interpretation of this problem is to consider the line ∆ of equation v = −αu + log_3 x.
The solutions are the points with integer coordinates located in the area delimited by
the line ∆ and the axes. The best solution corresponds to the point with the smallest vertical
distance¹ to the line.
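As a small numeric illustration of this reformulation (Python; one checks directly that 108 = 2^2·3^3 is the largest 2-integer below 127):

```python
import math

x = 127
alpha = math.log(2) / math.log(3)          # alpha = log_3(2) ~ 0.6309
log3x = math.log(x) / math.log(3)
beta = log3x - math.floor(log3x)           # beta = {log_3(x)}, the fractional part

# The largest 2-integer <= 127 is 108 = 2^2 * 3^3, the lattice point (a, b) = (2, 3):
a, b = 2, 3
assert 2**a * 3**b <= x
assert a * math.log(2) + b * math.log(3) <= math.log(x)   # inequality (3)
assert a * alpha + b <= beta + math.floor(log3x)          # the same, divided by log 3
```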
2 Continued fractions and Ostrowski’s number system
To solve the problem of finding the largest 2-integer ≤ x, we use results from the theory of
continued fractions and Diophantine approximation. We introduce the necessary mathematical
background in this section.
A simple continued fraction is an expression of the form

α = a_0 + 1/(a_1 + 1/(a_2 + 1/(a_3 + · · · ))),

where a_0 = ⌊α⌋ and a_1, a_2, . . . are integers ≥ 1. The a_i are called the partial quotients. A
continued fraction is represented by the sequence (a_n)_{n∈N}, which can be either finite or infinite.
¹We can equivalently consider the horizontal distance.
Every irrational real number α can be expressed uniquely as an infinite simple continued
fraction, written in the compact abbreviated notation α = [a_0, a_1, a_2, a_3, . . . ]. Similarly, every
rational number can be expressed uniquely as a finite simple continued fraction. For example,
the infinite continued fraction expansions of the irrationals π, e and log3 2 (that we shall use in
the remainder of this paper) are
π = [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, . . . ],
e = [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, . . . ],
log3 2 = [0, 1, 1, 1, 2, 2, 3, 1, 5, 2, 23, 3, . . . ].
The quantity obtained by restricting the continued fraction to its first n + 1 partial quotients,

p_n/q_n = [a_0, a_1, a_2, . . . , a_n],

is called the n-th convergent of α. The numbers p_n and q_n satisfy the recurrences

p_{n+1} = a_{n+1} p_n + p_{n−1},    q_{n+1} = a_{n+1} q_n + q_{n−1},    (5)

which make them easily computable starting from p_{−1} = 1, q_{−1} = 0, p_0 = a_0, q_0 = 1. They
satisfy the important identity

p_{k+1} q_k − p_k q_{k+1} = (−1)^k,

and it is known that p_n/q_n → α as n → ∞.
Thus, the sequence of convergents of an infinite continued fraction can be used to define rational
approximations of an irrational number. For example, the first convergents of π are listed in
Table 1.
Partial quotients      Convergents      Value
[3]                    3                3.000000000
[3, 7]                 22/7             3.142857143
[3, 7, 15]             333/106          3.141509434
[3, 7, 15, 1]          355/113          3.141592920
[3, 7, 15, 1, 292]     103993/33102     3.141592653

Table 1: The first partial quotients and convergents of π
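Recurrence (5) makes the convergents cheap to generate. The following Python sketch reproduces Table 1 (function names are ours; in general p_0 = a_0 and q_0 = 1, which reduces to the initialization above when a_0 = 0). Extracting partial quotients from a floating-point value is numerically fragile, so only the first few terms should be trusted:

```python
import math
from fractions import Fraction

def partial_quotients(x, n_terms):
    """First partial quotients of x, by the standard floor / reciprocal iteration."""
    a = []
    for _ in range(n_terms):
        q = math.floor(x)
        a.append(q)
        if x == q:
            break
        x = 1 / (x - q)
    return a

def convergents(a):
    """Convergents p_n/q_n from recurrence (5), starting with
    p_{-1} = 1, q_{-1} = 0, p_0 = a_0, q_0 = 1."""
    p_prev, q_prev = 1, 0
    p, q = a[0], 1
    result = [Fraction(p, q)]
    for a_next in a[1:]:
        p, p_prev = a_next * p + p_prev, p
        q, q_prev = a_next * q + q_prev, q
        result.append(Fraction(p, q))
    return result
```

For example, partial_quotients(math.pi, 5) gives [3, 7, 15, 1, 292], and the corresponding convergents end with 355/113 and 103993/33102, matching Table 1.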
Ostrowski's number system [7] is associated with the sequence (q_n)_{n∈N} of the denominators of
the convergents of the continued fraction expansion of an irrational number 0 < α < 1. The
following proposition holds.
Proposition 1 Every integer N can be written uniquely in the form

N = Σ_{k=1}^{m} d_k q_{k−1},    (6)

where

0 ≤ d_1 ≤ a_1 − 1, and 0 ≤ d_k ≤ a_k for k > 1,
d_k = 0 if d_{k+1} = a_{k+1}.
For example, if α = (1 + √5)/2 = [1, 1, 1, 1, . . . ] is the golden section, (q_n)_{n∈N} is the sequence of
Fibonacci numbers, and the condition d_k = 0 if d_{k+1} = a_{k+1} corresponds to the fact that two
consecutive ones do not occur in the corresponding Zeckendorf representation [8].
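For the golden-section case just mentioned, the Ostrowski digits can be obtained greedily; a minimal Python sketch (function name ours) computes the Zeckendorf representation:

```python
def zeckendorf(n):
    """Greedy Zeckendorf representation of n >= 1 as a sum of non-consecutive
    Fibonacci numbers. For alpha the golden section, the q_k are the Fibonacci
    numbers, and the digit condition d_k = 0 whenever d_{k+1} = a_{k+1} (= 1)
    is exactly the no-two-consecutive-ones rule."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)   # greedy: the remainder is smaller than the previous
            n -= f            # Fibonacci number, so adjacent terms are never taken
    return parts
```

For instance zeckendorf(100) returns [89, 8, 3]; the greedy choice automatically prevents two consecutive Fibonacci numbers.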
Ostrowski's representation of integers can be extended to real numbers [9]. The base is given
by the sequence (θ_n)_{n∈N}, where θ_n = q_n α − p_n. Note that the sign of θ_n is (−1)^n.
Proposition 2 Every real number β such that −α ≤ β < 1 − α can be written uniquely in the
form

β = Σ_{k=1}^{+∞} b_k θ_{k−1},    (7)

where

0 ≤ b_1 ≤ a_1 − 1, and 0 ≤ b_k ≤ a_k for k > 1,
b_k = 0 if b_{k+1} = a_{k+1},
b_k ≠ a_k for infinitely many even and odd integers k.
Prop. 2 can be used to approximate β modulo 1 (by considering only the fractional parts)
by numbers of the form Nα. If we represent β using (7), the integers N which provide the best
approximations are given by the sequence

N_n = Σ_{k=1}^{n} b_k q_{k−1}.    (8)
Equation (8) provides a sequence of best approximations to β from both sides; i.e., the numbers
N_n α can be either greater or smaller than β (modulo 1). If we are only interested in the best
left approximations to β, we must consider the base (|θ_n|)_{n∈N}. The following proposition holds.
Proposition 3 Every real number β such that 0 ≤ β < 1 can be written uniquely in the form

β = Σ_{k=1}^{+∞} c_k |θ_{k−1}|,    (9)

where

0 ≤ c_k ≤ a_k for k > 1,
c_{k+1} = 0 if c_k = a_k,
c_k ≠ a_k for infinitely many even integers k.

In this case, the sequence of best left approximations is more difficult to define, due to the
alternating signs of (θ_n)_{n∈N}. In the next section, we present an algorithm, inspired by [10],
which solves this problem.
3 Definition of the sequence of inhomogeneous best approximations of β
Let 0 < α < 1 be irrational, with α = [0, a_1, a_2, . . . ], and let 0 < β ≤ 1 be given. Inhomogeneous
left approximations of β are numbers of the form kα + l less than or equal to β, with k, l
integers. It is clear that there are infinitely many such approximations. Here, we want to define
two increasing sequences of integers (k_n)_{n∈N} and (l_n)_{n∈N} such that, for all n ∈ N,

0 < k_n α − l_n < k_{n+1} α − l_{n+1} < β,

and, furthermore, for all n, for all k < k_{n+1} with k ≠ k_n, and for all l ∈ Z such that
0 ≤ kα − l ≤ β, we have

0 < kα − l < k_n α − l_n < β.
For simplicity, we define f_n = |θ_n| for all n. We have f_{−1} = 1, f_0 = α, f_1 = 1 − a_1 α, and,
for all n ≥ 1,

f_{n+1} = f_{n−1} − a_{n+1} f_n.    (10)

The sequence (f_n + f_{n+1})_{n∈N} is decreasing, and since 0 < β ≤ 1, there exists a unique non-negative
integer n such that

f_n + f_{n+1} < β ≤ f_n + f_{n−1}.    (11)
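Concretely, for α = log_3 2 (first partial quotients taken from section 2), the sequence (f_n) and the index n of (11) can be computed as in the following Python sketch; β = 0.7495 is the value from the worked example of section 5, and the index found, n = 1, matches the first row of Table 3:

```python
import math

alpha = math.log(2) / math.log(3)
A = [0, 1, 1, 1, 2, 2, 3, 1, 5, 2]   # first partial quotients of alpha (section 2)

# f[i] holds f_{i-1}: f_{-1} = 1, f_0 = alpha, then f_{n+1} = f_{n-1} - a_{n+1} f_n.
f = [1.0, alpha]
for n in range(len(A) - 1):
    f.append(f[n] - A[n + 1] * f[n + 1])

beta = 0.7495                        # beta = {log_3 x} from the example of section 5
n = 0                                # unique n with f_n + f_{n+1} < beta <= f_n + f_{n-1}
while f[n + 1] + f[n + 2] >= beta:
    n += 1
assert n == 1                        # matches the first row of Table 3
```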
Before we give the algorithm that defines the sequence of best left inhomogeneous approximations
of β, we prove two lemmas.
Lemma 1 Let 0 < β ≤ 1 and (f_n)_{n∈N} be defined as above. Then there exist a unique non-negative
integer n, a unique non-negative integer c, and a unique real number e such that

β = c f_n + f_{n+1} + e,    (12)

with 0 < e ≤ f_n; 1 ≤ c ≤ a_{n+1} if n ≥ 1; and 1 ≤ c ≤ a_1 − 1 if n = 0.

Proof: If n ≥ 1, then, with f_n + f_{n+1} < β ≤ f_n + f_{n−1}, and (10), we have
f_n < β − f_{n+1} ≤ f_{n−1} + f_n − f_{n+1} = (a_{n+1} + 1) f_n. If n = 0, then
f_0 + f_1 < β ≤ 1 = f_{−1} = a_1 f_0 + f_1. We note that a_1 ≥ 2 in this case. □
Lemma 2 Let α be an irrational number such that 0 < α < 1 and α = [0, a_1, a_2, . . . ]. Also
define (p_n)_{n∈N}, (q_n)_{n∈N}, the sequences of the numerators and denominators of the convergents
of α. Let 0 < β ≤ 1 and (f_n)_{n∈N} be defined as above, and let n, c, e be the unique values
satisfying (12). We define the integers k, l by setting

k = q_n, l = p_n, if n is even;
k = −c q_n + q_{n+1}, l = −c p_n + p_{n+1}, if n is odd,

where c is the unique integer ≥ 1 given by (12). Then we have 0 < β − (kα − l) < β.

Proof: Assume first that n is even. We have β − (kα − l) = β − f_n, and thus
0 < β − (kα − l) < β. Now, if n is odd, β − (kα − l) = β − [−c(q_n α − p_n) + (q_{n+1} α − p_{n+1})]
= β − c f_n − f_{n+1} = e, and hence 0 < β − (kα − l) ≤ f_n < β. □
Algorithm 2 computes the two sequences of non-negative integers (k_n)_{n∈N} and (l_n)_{n∈N} such
that (k_n α − l_n) corresponds to the inhomogeneous best left approximations to β. This algorithm
is inspired by [11]; for similar algorithms, see [12, 13, 14]. Note that β − (k_{i+1} α − l_{i+1}) is
equal to e_i if n_i is odd, and to (c_i − 1) f_{n_i} + f_{n_i+1} + e_i if n_i is even. Hence we may
have n_{i+1} = n_i. This happens if and only if n_i is even and c_i > 1; it then happens (c_i − 1)
times before the sequence (n_i) resumes growing to +∞.

Let us prove that Algorithm 2 does actually provide the best left approximations to β.
Proposition 4 Let 0 < α < 1 be irrational with α = [0, a_1, a_2, . . . ], and let 0 < β ≤ 1 be
given. Let (p_n/q_n)_{n∈N} be the sequence of the convergents of α. Then the increasing sequences
of integers (k_i)_{i∈N} and (l_i)_{i∈N} given by Algorithm 2 satisfy, for all i ∈ N,

0 < k_i α − l_i < k_{i+1} α − l_{i+1} < β,    (13)
Algorithm 2 Computes the sequence (k_n α − l_n)_{n∈N} of inhomogeneous best left approximations
to β.
With (f_n)_{n∈N} defined as above, we start with k_0 = 0, l_0 = 0, and we inductively define n_i, c_i,
e_i, k_i, and l_i as follows. If

β − (k_i α − l_i) = c_i f_{n_i} + f_{n_i+1} + e_i,

with 0 < e_i ≤ f_{n_i}; 1 ≤ c_i ≤ a_{n_i+1} if n_i > 0; and 1 ≤ c_i ≤ a_1 − 1 if n_i = 0; then we define

k_{i+1} = k_i + q_{n_i},    l_{i+1} = l_i + p_{n_i},    if n_i is even,
k_{i+1} = k_i − c_i q_{n_i} + q_{n_i+1},    l_{i+1} = l_i − c_i p_{n_i} + p_{n_i+1},    if n_i is odd.
and, furthermore, for all i, for all k_i < k < k_{i+1} and for all l ∈ Z such that 0 ≤ kα − l ≤ β,
we have

0 ≤ kα − l < k_i α − l_i < β.    (14)
Proof: From Lemma 2, we have 0 < k_i α − l_i < β for all i. We first prove (13) by considering
the cases n_i even and n_i odd separately. If n_i is even, then β > k_{i+1} α − l_{i+1} =
(k_i α − l_i) + q_{n_i} α − p_{n_i} = (k_i α − l_i) + f_{n_i} > k_i α − l_i > 0. We prove the case n_i odd in a
similar way: if n_i is odd, then β > k_{i+1} α − l_{i+1} = (k_i α − l_i) − c_i (q_{n_i} α − p_{n_i}) +
q_{n_i+1} α − p_{n_i+1} = (k_i α − l_i) + c_i f_{n_i} + f_{n_i+1} > k_i α − l_i > 0.

Let us now consider k_i < k < k_{i+1} and l ∈ Z such that 0 ≤ kα − l ≤ β, and let us prove (14).
By rewriting β − (kα − l), we have 0 ≤ β − (kα − l) = [β − (k_i α − l_i)] +
[(k_i α − l_i) − (k_{i+1} α − l_{i+1})] + [(k_{i+1} α − l_{i+1}) − (kα + l)... (kα − l)] ≤ β. What we prove in the next two
cases, which depend on the parity of n_i, is that β − (kα − l) > β − (k_i α − l_i).
• Let us first assume that n_i is even. We have

β − (kα − l) = β − (k_i α − l_i) − f_{n_i} + (k_{i+1} α − l_{i+1} − kα + l).

Thus, what remains to be proved is that the last term (k_{i+1} α − l_{i+1} − kα + l) is greater
than f_{n_i}. Since |k_{i+1} − k| < |k_{i+1} − k_i| = q_{n_i}, we have |k_{i+1} α − l_{i+1} − kα + l| > f_{n_i}.
(See [15] for a proof.)
To complete the proof, let us consider the sign of (k_{i+1} α − l_{i+1} − kα + l). From (11) and
Algorithm 2, we know that |β − (k_i α − l_i) − f_{n_i}| ≤ f_{n_i−1}.

– If k_{i+1} − k ≠ q_{n_i−1}, then |k_{i+1} α − l_{i+1} − kα + l| > f_{n_i−1}, and since 0 ≤ kα − l ≤ β,
we must have (k_{i+1} α − l_{i+1} − kα + l) > 0.

– If k_{i+1} − k = q_{n_i−1}, then, since n_i − 1 is odd, we have (k_{i+1} α − l_{i+1} − kα + l) =
q_{n_i−1} α − p_{n_i−1} = −f_{n_i−1} < 0, and we get β − (kα − l) = β − (k_i α − l_i) − f_{n_i} − f_{n_i−1} < 0,
which contradicts our assumption.
• If we now assume that n_i is odd, we have

β − (kα − l) = β − (k_i α − l_i) − (c_i f_{n_i} + f_{n_i+1}) + (k_{i+1} α − l_{i+1} − kα + l).

Here, what remains to be proved is that the last term (k_{i+1} α − l_{i+1} − kα + l) is greater
than c_i f_{n_i} + f_{n_i+1}. Since |k_{i+1} − k| < |k_{i+1} − k_i| = q_{n_i+1} − c_i q_{n_i}, we have
|k_{i+1} α − l_{i+1} − kα + l| > f_{n_i+1} + c_i f_{n_i}. Moreover, we also know from (12) and
Algorithm 2 that |β − (k_i α − l_i) − c_i f_{n_i} − f_{n_i+1}| ≤ f_{n_i}. Thus, we have
(k_{i+1} α − l_{i+1} − kα + l) > 0.

Thus, in both cases we have β − (kα − l) > β − (k_i α − l_i). This concludes the proof. □
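Putting Lemmas 1 and 2 and Algorithm 2 together, the largest 2-integer ≤ x can be computed as in the following Python sketch (the first partial quotients of α = log_3 2 are hardcoded from section 2; function names are ours). Floating-point logarithms make boundary cases, e.g. when x is itself a 2-integer, potentially fragile, but the worked example of section 5 is reproduced exactly:

```python
import math

ALPHA = math.log(2) / math.log(3)        # alpha = log_3(2)
A = [0, 1, 1, 1, 2, 2, 3, 1, 5, 2]       # first partial quotients of alpha (section 2)

# Convergents p_n, q_n and f_n = |q_n*alpha - p_n| (recurrence (10)); index = subscript.
p, q = [0, 1], [1, 1]                    # p_0, p_1 and q_0, q_1
f = [ALPHA, 1 - ALPHA]                   # f_0 = alpha, f_1 = 1 - a_1*alpha with a_1 = 1
for n in range(2, len(A)):
    p.append(A[n] * p[n - 1] + p[n - 2])
    q.append(A[n] * q[n - 1] + q[n - 2])
    f.append(f[n - 2] - A[n] * f[n - 1])

def largest_2integer(x):
    """Largest 2-integer <= x via the best left approximations of Algorithm 2,
    stopped when the ternary exponent B - l would become negative (cf. section 5)."""
    log3x = math.log(x) / math.log(3)
    B = math.floor(log3x)
    beta = log3x - B
    k, l = 0, 0
    a, b = 0, B                      # k_0 = l_0 = 0 already gives the 2-integer 3^B
    while True:
        r = beta - (k * ALPHA - l)
        if r <= 0:                   # beta attained exactly: x is itself a 2-integer
            return a, b
        n = 0                        # unique n with f_n + f_{n+1} < r (condition (11))
        while f[n] + f[n + 1] >= r:
            n += 1
        c = 1                        # unique c with r = c*f_n + f_{n+1} + e, 0 < e <= f_n
        while r - c * f[n] - f[n + 1] > f[n]:
            c += 1
        if n % 2 == 0:               # update rules of Algorithm 2
            k, l = k + q[n], l + p[n]
        else:
            k, l = k - c * q[n] + q[n + 1], l - c * p[n] + p[n + 1]
        if l > B:                    # negative ternary exponent: stop
            return a, b
        a, b = k, B - l
```

On the example of section 5, largest_2integer(23832098195) returns (17, 11), i.e., 2^17·3^11, after scanning only the four approximations listed in Table 3.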
4 Explicit solution of the inhomogeneous approximation problem
As stated in the introduction, finding the largest 2-integer less than or equal to x is equivalent to
finding two non-negative integers a, b such that 2^a 3^b ≤ x and, among the many possible solutions
to this problem, 2^a 3^b takes the largest possible value.
Let a, b ∈ N be one of the solutions to the approximation problem, that is, such that 2^a 3^b ≤ x.
Clearly, we have

a log 2 + b log 3 ≤ log x.

If α = log_3 2 (note that α is irrational and 0 < α < 1), and β = {log_3 x} is the fractional
part of log_3 x, i.e., β = log_3 x − ⌊log_3 x⌋, then the problem reduces to finding the two
non-negative integers a, b such that

aα + b ≤ β + ⌊log_3 x⌋.
We note that a ≤ ⌊log_2 x⌋ and b ≤ ⌊log_3 x⌋. We are thus looking for (p, q) ∈ N² such that

pα − q ≤ β  and  pα − q = max_{(r,s)∈N²} { rα − s : 0 ≤ rα − s ≤ β, r ≤ ⌊log_2 x⌋, s ≤ ⌊log_3 x⌋ }.

From p, q, we easily get the non-negative exponents a, b by setting a = p and b = ⌊log_3 x⌋ − q.
Proposition 5 Let x ∈ N be given. Let α = log_3 2 (0 < α < 1 and α ∉ Q), and let
β = {log_3 x}. Let n be such that k_n ≤ ⌊log_2 x⌋ < k_{n+1}, and let p = k_n, q = l_n. Then

pα − q = max_{(r,s)∈N²} { rα − s : 0 ≤ rα − s ≤ β, r ≤ ⌊log_2 x⌋, s ≤ ⌊log_3 x⌋ }.

If a = p and b = ⌊log_3 x⌋ − q, we get the expected result

2^a 3^b = max { 2^c 3^d : (c, d) ∈ N², 2^c 3^d ≤ x }.
Proof: Directly follows from the proof of Prop. 4 in section 3. □
5 Example
Let x = 23832098195. We want to find a, b ≥ 0 such that 2^a 3^b is the largest 2-integer less than
or equal to x. Let α = log_3 2 ≈ 0.6309. We have β = {log_3 x} ≈ 0.7495 and ⌊log_3 x⌋ = 21. We
set k_0 = 0, l_0 = 0. The partial quotients and the corresponding convergents of the continued
fraction expansion of α are given in Table 2. Table 3 gives the first best inhomogeneous left
i                      0          1          2          3          4          5          6
a_i                    0          1          1          1          2          2          3
p_i                    0          1          1          2          5          12         41
q_i                    1          1          2          3          8          19         65
f_i = |q_i α − p_i|    0.630930   0.369070   0.261860   0.107211   0.047438   0.012335   0.010434

Table 2: Partial quotients of the continued fraction expansion of α = log_3 2, and the corre-
sponding sequences (p_i)_{i≥0}, (q_i)_{i≥0}, and (|q_i α − p_i|)_{i≥0}
approximations to β. The stop condition is reached at iteration 4, because we get a negative
ternary exponent (21 − 39 = −18). We thus retain the 3rd approximation, which gives a = 17
and b = 21 − 10 = 11. In order to find the DBNS representation of x, we then apply the
i    β − (k_i α − l_i)    n_i    c_i    e_i      k_{i+1}    l_{i+1}
0    0.7495               1      1      0.1186   1          0
1    0.1186               4      2      0.0114   9          5
2    0.0712               4      1      0.0114   17         10
3    0.0237               5      1      0.0009   63         39

Table 3: Best left approximations of β = 0.7495 with numbers of the form kα − l
same algorithm with the value x − 2^17·3^11 = 613086611. For completeness, we give the DBNS
representation of x provided by the F-greedy algorithm:

x = 2^17·3^11 + 2^7·3^14 + 2^7·3^8 + 2^2·3^8 + 2^9·3^0 + 2^2·3^1 + 2^0·3^1.
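The numbers in this example can be cross-checked with exact integer arithmetic (Python):

```python
x = 23832098195

# 2^17 * 3^11 is the largest 2-integer <= x: scan every binary exponent exactly.
best, pow2 = 1, 1
while pow2 <= x:
    s = pow2
    while 3 * s <= x:
        s *= 3
    best = max(best, s)
    pow2 *= 2
assert best == 2**17 * 3**11
assert x - best == 613086611

# The F-greedy representation given above sums back to x, with 7 digits.
terms = [(17, 11), (7, 14), (7, 8), (2, 8), (9, 0), (2, 1), (0, 1)]
assert sum(2**a * 3**b for a, b in terms) == x
```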
6 A-greedy: An approximate greedy algorithm
The greedy algorithm is not optimal, in the sense that it does not necessarily produce a canonic
DBNS representation. However, within this algorithm, the step which consists in finding the
largest 2-integer less than or equal to the input x is optimal.² It seems natural, then, to investigate
the potential advantages of an approximate greedy algorithm, where we only perform a few
iterations of Algorithm 2 to define a 2-integer, although not the largest, less than or equal to x.

A-greedy is presented in Algorithm 3. It returns R, a DBNS representation of x, where only
d iterations of Algorithm 2 are performed at each step to define a "good" 2-integer less than or
equal to x. In the following, we shall refer to d as the depth of search, or simply the depth. In
Algorithm 3 A-greedy
Input: Two positive integers x, d
Output: R, the sequence of exponents in the A-greedy DBNS representation of x
1: R ← ∅
2: while x > 0 do
3:   Apply d iterations of Algorithm 2 to find s = 2^a 3^b ≤ x
4:   R ← R ∪ {(a, b)}
5:   x ← x − s
6: end while
²We have proved in the previous sections that Algorithm 2 returns the best left approximation to β.
the example presented in section 5, the successive best left approximations to x = 23832098195
are 2^1·3^21, 2^9·3^16, and 2^17·3^11. Figure 2 shows the DBNS representations obtained for x when
A-greedy is applied at depths 1, 2, and 3. As expected, we remark that the number of digits increases as
Figure 2: DBNS representations of x = 23832098195 given by A-greedy at depths 1, 2, and 3,
respectively. Note that depth 3 is equivalent to the full-greedy solution in this case
the depth decreases; we get 12 digits at depth 1, 8 digits at depth 2, and 7 digits at depth 3
(note that depth 3 is equivalent to the full-greedy solution in this case). Also, the largest binary
exponent is smaller for smaller depths.
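The A-greedy conversion can be sketched in Python as follows (the inner loop is the Algorithm 2 iteration of section 3, with the first partial quotients of α = log_3 2 hardcoded; function names are ours). On x = 23832098195 it reproduces the digit counts quoted above (12 at depth 1, 8 at depth 2, and 7 at depth 3):

```python
import math

ALPHA = math.log(2) / math.log(3)        # alpha = log_3(2)
A = [0, 1, 1, 1, 2, 2, 3, 1, 5, 2]       # first partial quotients of alpha (section 2)
p, q = [0, 1], [1, 1]                    # convergent numerators and denominators
f = [ALPHA, 1 - ALPHA]                   # f_n = |q_n*alpha - p_n|
for n in range(2, len(A)):
    p.append(A[n] * p[n - 1] + p[n - 2])
    q.append(A[n] * q[n - 1] + q[n - 2])
    f.append(f[n - 2] - A[n] * f[n - 1])

def good_2integer(x, d):
    """At most d iterations of Algorithm 2: a 'good' 2-integer <= x."""
    log3x = math.log(x) / math.log(3)
    B = math.floor(log3x)
    beta = log3x - B
    k, l, a, b = 0, 0, 0, B
    for _ in range(d):
        r = beta - (k * ALPHA - l)
        if r <= 0:                       # x is itself a 2-integer
            break
        n = 0
        while f[n] + f[n + 1] >= r:      # condition (11)
            n += 1
        c = 1                            # decomposition (12)
        while r - c * f[n] - f[n + 1] > f[n]:
            c += 1
        if n % 2 == 0:
            k, l = k + q[n], l + p[n]
        else:
            k, l = k - c * q[n] + q[n + 1], l - c * p[n] + p[n + 1]
        if l > B:                        # stop condition reached before depth d
            break
        a, b = k, B - l
    return a, b

def a_greedy(x, d):
    """Algorithm 3: greedy conversion to DBNS using depth-d approximations."""
    R = []
    while x > 0:
        a, b = good_2integer(x, d)
        R.append((a, b))
        x -= 2**a * 3**b
    return R
```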
These results were to be expected. However, the following questions seem difficult to answer
in the general case: by using the A-greedy algorithm at a given depth d, how many digits are
required to represent x, compared to the solution provided by F-greedy? What decrease (resp.
increase) can we expect on the binary (resp. ternary) exponents? In order to answer these
questions, we have implemented the approximate greedy algorithm at different depths, for a
thousand randomly chosen numbers of sizes ranging from 64 to 512 bits. The results are shown
in Figures 3 to 6, and our experiments are summarized in Table 4. We first remark that, for all
our tests (up to 512-bit integers), depth 5 is equivalent to the F-greedy algorithm. This means
that we must perform 6 iterations of Algorithm 2 to get an approximation which corresponds
to a negative ternary exponent, i.e., the stop condition. By using the A-greedy algorithm, we
perform at most d iterations. It is possible that the stop condition occurs before we reach the
desired depth, or would occur at the (d + 1)-th iteration. Only in those two cases do we get the
Figure 3: Number of digits obtained with full-greedy (upper curves) and its difference with
the approximate versions at depths 2 and 3, for 1000 randomly chosen 64-bit numbers
Figure 4: Number of digits obtained with full-greedy (upper curves) and its difference with
the approximate versions at depths 3 and 4, for 1000 randomly chosen 128-bit numbers
Size of x    Approx. # digits    Avg. dist. to F-greedy at depth d
(in bits)    with F-greedy       d = 1    d = 2    d = 3    d = 4    d = 5
64           12                  9.08     2.35     0.47     0        -
128          20                  19.76    5.8      1.4      0.02     0
256          35                  -        14.5     4.6      0.5      0
512          62                  -        -        12.14    2.2      0

Table 4: Average distance to the full-greedy solution at various depths. Note that depth 5 is
equivalent to the full-greedy algorithm in all our experiments
Figure 5: Number of digits obtained with full-greedy (upper curves) and its difference with
the approximate versions at depths 3 and 4, for 1000 randomly chosen 256-bit numbers
Figure 6: Number of digits obtained with full-greedy (upper curves) and its difference with
the approximate versions at depths 3 and 4, for 1000 randomly chosen 512-bit numbers
same result as the F-greedy algorithm, i.e., the largest 2-integer less than or equal to x. It is
also interesting to remark that at medium depths (d = 3 or 4), the size of the representations
can be very close to the F-greedy ones, and in some cases even better, i.e., with fewer digits.

Depending on the application, we might not need the F-greedy DBNS representation, but
rather be satisfied with an approximate one, with slightly more digits, especially if the time
required for the conversion drops. Another consequence of using A-greedy is that the binary
exponent is likely to be smaller at small depths than with F-greedy. It is clear that the ternary
exponent increases at the same time, but not as fast as the binary exponent decreases (the ratio
is α ≈ 0.63).

As an example of such an application, for which the A-greedy approach can be advantageous,
we have been investigating an exponentiation algorithm for cryptographic applications3, whose
complexity depends on the DBNS representation of some quantity e. The complexity of our
algorithm (the number of elementary operations) is equal to the largest binary exponent plus
the size of the DBNS representation of e. The largest ternary exponent only influences the
precomputations and memory requirements. In Table 5, we give a few examples, which clearly
show the gain we get with the A-greedy compared to the F-greedy solution for this specific
cryptographic application. We can remark that the loss in the precomputation step, directly
influenced by the largest ternary exponent, is very small.
7 Discussion
7.1 Performance comparisons
A straightforward approach to the problem of finding the largest 2-integer less than or equal to x
consists in computing, for every integer u from 0 to ⌊log_2 x⌋, the distance to the line ∆ of
equation v = −αu + β, and keeping the values (u, v) which lead to the smallest horizontal (or
vertical) distance, i.e., the smallest fractional part of β − αu over all integers 0 ≤ u ≤ ⌊log_2 x⌋.⁴
In Fig. 7, we have plotted the line ∆ of equation v = −0.6309u + 0.7495, which corresponds
to the previous example, together with the points we have to scan if we perform the exhaustive

³We do not give any details about this algorithm in this article, since this research is currently in progress
and a journal paper is in preparation.
⁴More efficiently, we can consider the line ∆′: w = −log_2(3)·u + log_2(x), and keep the minimum distance
over all integers 0 ≤ u ≤ ⌊log_3 x⌋, simply because ⌊log_3 x⌋ < ⌊log_2 x⌋.
                  # digits    Max bin    Max tern    # Operations    Gain
F-greedy          35          192        134         227
A-greedy (d = 3)  38          75         154         113             114
F-greedy          36          231        137         267
A-greedy (d = 3)  36          132        154         168             99
F-greedy          33          199        151         232
A-greedy (d = 3)  40          75         151         115             117
F-greedy          35          227        147         262
A-greedy (d = 2)  48          67         148         115             147
F-greedy          33          189        145         222
A-greedy (d = 2)  46          70         152         116             106
F-greedy          35          170        149         205
A-greedy (d = 2)  54          65         159         119             86

Table 5: Gain achieved with A-greedy compared to F-greedy for a specific cryptographic
application using 256-bit numbers
search as explained in the previous paragraph, and those we deduce from Algorithm 2. We
clearly see that the algorithm based on continued fractions and Ostrowski's number system
presented in section 3 scans only four possible solutions, whereas the straightforward
algorithm must scan all the points on a discrete line under ∆.
For large values of x, Algorithm 2 is much faster than the exhaustive search. Our ex-
periments, on thousands of randomly chosen numbers, show a speedup of about 35%. The
approximate version presented in section 6 provides even better improvements.
7.2 Precomputations
Continued fraction expansions are generally expensive to compute. Fortunately, in our case,
α = log_3 2 only depends on the parameters of the DBNS, i.e., the bases 2 and 3. The partial
quotients a_i, the convergents p_i, q_i, and the sequence (f_i)_i = (|q_i α − p_i|)_i can thus be
precomputed and stored in lookup tables. The number of elements we have to precompute depends
on the size of the input value x we want to convert. Assume that x is an n-bit integer, i.e.,
2^n ≤ x < 2^{n+1}. Since the binary exponent a satisfies 0 ≤ a ≤ ⌊log_2 x⌋, we have 0 ≤ a < n + 1.
Our algorithm returns the digits of a in the Ostrowski representation; thus
a = Σ_{i=1}^{m} c_i q_{i−1} < n + 1, and Prop. 1
ensures the uniqueness of this representation. Thus m, the maximum index, is the value
i such that q_i < n + 1 ≤ q_{i+1}. For example, if 2^64 ≤ x < 2^65, since q_6 = 65, we only need the
Figure 7: Graphical interpretation of the problem of finding the largest 2-integer less than or
equal to x = 23832098195, and the points scanned using both the straightforward approach and
the proposed algorithm (which scans only the four points (0, 21), (1, 21), (9, 16), and (17, 11))
denominators q_1 to q_5 to represent a. If m is the maximum required index, we thus need to
precompute the sequences p_n and q_n up to q_{m+1}. (We recall that, when n_i is odd, Algorithm 2
performs k_{i+1} = k_i − c_i q_{n_i} + q_{n_i+1}.) Table 6 gives the number of elements of the sequences
a_i, p_i, q_i, f_i that have to be stored, based on the size of the input number x. The conclusion is that
Size of x (in bits)    # Elements
8 to 18                5
19 to 64               6
65 to 83               7
84 to 483              8
484 to 1053            9

Table 6: Number of elements of the sequences a_i, p_i, q_i, and f_i that must be precomputed,
based on the size of the input number x
for most applications, both the F-greedy and A-greedy algorithms can be implemented at a very
low memory cost.
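The thresholds in Table 6 follow directly from the denominators q_i; a quick Python check (first partial quotients from section 2):

```python
# First partial quotients of alpha = log_3(2), from section 2.
A = [0, 1, 1, 1, 2, 2, 3, 1, 5, 2]

q = [0, 1]                        # q_{-1}, q_0
for a in A[1:]:
    q.append(a * q[-1] + q[-2])   # q_{n+1} = a_{n+1} q_n + q_{n-1}

# q[i+1] holds q_i, so q_1 .. q_9 are:
assert q[2:] == [1, 2, 3, 8, 19, 65, 84, 485, 1054]
```

In particular q_6 = 65, so 64-bit inputs need only the denominators up to q_5 = 19; similarly, q_8 = 485 and q_9 = 1054 sit at the breakpoints of the last two rows of the table.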
8 Conclusions
Using theoretical results from diophantine approximation, we proposed an efficient solution to
the problem of findind the largest 2-integer less than or equal to a positive integer x, when
lookup tables cannot be used. This operation is required at each step of a greedy algorithm
which converts binary numbers into DBNS. An approximated version of the greedy algorithm,
called A-greedy, is also presented, which only looks for a ”not-too-bad” 2-integer less than x
at each step. Experimental results illustrate the efficiency and potential advantages of our
approach for specific applications where large numbers are required such as in cryptography.
Others applications, for example in digital signal processing, can undoubtedly take advantages
of this approximated greedy solution. This study will be investigated further to answer some
more difficult questions related to the double-base number system and its generalization, the
multi-dimensional logarithmic number system.
References
[1] V. S. Dimitrov, G. A. Jullien, and W. C. Miller. Theory and applications of the double-base
number system. IEEE Transactions on Computers, 48(10):1098–1106, October 1999.
[2] V. S. Dimitrov and T. V. Cooklev. Two algorithms for modular exponentiation based on
nonstandard arithmetics. IEICE Transactions on Fundamentals of Electronics, Communi-
cations and Computer Science, E78-A(1):82–87, January 1995. Special issue on cryptogra-
phy and information security.
[3] V. S. Dimitrov, J. Eskritt, L. Imbert, G. A. Jullien, and W. C. Miller. The use of the multi-
dimensional logarithmic number system in DSP applications. In 15th IEEE symposium on
Computer Arithmetic, pages 247–254, Vail, CO, USA, June 2001.
[4] R. Muscedere. Difficult Operations in the Multi-Dimensional Logarithmic Number System.
PhD thesis, University of Windsor, Windsor, ON, Canada, 2003.
[5] B. M. M. de Weger. Algorithms for Diophantine equations, volume 65 of CWI Tracts.
Centrum voor Wiskunde en Informatica, Amsterdam, 1989.
[6] V. S. Dimitrov, G. A. Jullien, and W. C. Miller. An algorithm for modular exponentiation.
Information Processing Letters, 66(3):155–159, May 1998.
[7] J.-P. Allouche and J. Shallit. Automatic Sequences. Cambridge University Press, 2003.
[8] E. Zeckendorf. Représentation des nombres naturels par une somme de nombres de Fi-
bonacci ou de nombres de Lucas. Fibonacci Quarterly, 10:179–182, 1972.
[9] V. Berthé. Autour du système de numération d’Ostrowski. Bulletin of the Belgian Mathe-
matical Society, 8:209–239, 2001.
[10] V. Berthé, N. Chekhova, and S. Ferenczi. Covering numbers: Arithmetics and dynamics
for rotations and internal exchanges. Journal d’Analyse Mathématique, 79:1–31, 1999.
[11] N. B. Slater. Gaps and steps for the sequence nθ mod 1. Mathematical Proceedings of the
Cambridge Philosophical Society, 63:1115–1123, 1967.
[12] V. T. Sós. On the theory of diophantine approximations. I. Acta Mathematica Hungarica,
8:461–472, 1957.
[13] V. T. Sós. On the theory of diophantine approximations. II. Acta Mathematica Hungarica,
9:229–241, 1958.
[14] V. T. Sós. On a problem of Hartman about normal forms. Colloquium Mathematicum,
7:155–160, 1960.
[15] J. W. S. Cassels. An Introduction to Diophantine Approximation. Cambridge University
Press, 1957.