Fengyun Liu

Different ways to calculate Fibonacci numbers

NOTE: All code below is written in a dialect of Scheme called Racket.

Naive approach

(define (fibonacci n)
  (if (or (= n 1) (= n 2))
      1
      (+ (fibonacci (- n 1))
         (fibonacci (- n 2)))))

Problem: exponential time complexity; even (fibonacci 1000) is far out of reach, because the number of recursive calls satisfies C(n) = C(n-1) + C(n-2) + 1 and therefore grows like the Fibonacci numbers themselves.

Improved approach

(define (fibonacci-iter a b n)
  (if (= 0 n)
      (+ a b)
      (fibonacci-iter b (+ a b) (- n 1))))

(define (fibonacci n)
  (fibonacci-iter -1 1 n))

This version is not only faster, but also runs in constant stack space: the recursive call is in tail position, and Racket (like any Scheme) optimizes tail calls into a loop.
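
For example, (fibonacci 5) unfolds as the following chain of tail calls; each call replaces the previous one, so the stack does not grow:

(fibonacci 5)
= (fibonacci-iter -1 1 5)
= (fibonacci-iter 1 0 4)
= (fibonacci-iter 0 1 3)
= (fibonacci-iter 1 1 2)
= (fibonacci-iter 1 2 1)
= (fibonacci-iter 2 3 0)
= 5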

Problem: if (fibonacci 1000) is called twice, it performs the same computation twice!

Memoization

Idea: Use a table to remember computation results.

(define (memoize f)
  (let ([memo (make-hash)])
    (lambda (x)
      ;; return the cached value if present; otherwise compute (f x),
      ;; store it in the table, and return it
      (or (hash-ref memo x #f)
          (hash-ref! memo x (f x))))))

(define memo-fibonacci
  (memoize (lambda (n)
             (if (or (= n 1) (= n 2))
                 1
                 (+ (memo-fibonacci (- n 1))
                    (memo-fibonacci (- n 2)))))))
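
For example, a quick check in a Racket REPL (prompt formatting may differ):

> (memo-fibonacci 10)
55
> (memo-fibonacci 10)   ; the second call is answered from the table
55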

Even faster

Idea: A faster approach to compute \(a^{10}\) is to compute \((a \times (a^2)^2)^2\), which requires only 4 multiplications. For a large exponent n, this repeated squaring reduces the number of multiplications from \(\Theta(n)\) to \(\Theta(\log n)\).
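
As a sketch of this idea for plain numbers (the name fast-expt is borrowed from SICP; it is not used in the Fibonacci code below):

;; compute b^n with O(log n) multiplications by repeated squaring
(define (fast-expt b n)
  (cond [(= n 0) 1]
        [(even? n) (let ([half (fast-expt b (/ n 2))])
                     (* half half))]
        [else (* b (fast-expt b (- n 1)))]))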

In the fibonacci-iter above, the operation \(a' = b\), \(b' = a + b\) is performed n times, which is equivalent to multiplying by the matrix \(\bigl(\begin{smallmatrix} 0 & 1 \\ 1 & 1\end{smallmatrix}\bigr)\) n times. This fact allows us to apply the same optimization as in computing exponentiation.
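
To see this, note that a single step of the iteration is exactly one multiplication by that matrix: \(\bigl(\begin{smallmatrix} a & b\end{smallmatrix}\bigr) \times \bigl(\begin{smallmatrix} 0 & 1 \\ 1 & 1\end{smallmatrix}\bigr) = \bigl(\begin{smallmatrix} b & a + b\end{smallmatrix}\bigr)\).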

You can verify that \(\bigl(\begin{smallmatrix} 0 & 1\end{smallmatrix}\bigr) \times \bigl(\begin{smallmatrix} 0 & 1 \\ 1 & 1\end{smallmatrix}\bigr)^n\) computes the n-th and (n+1)-th Fibonacci numbers. The row vector \(\bigl(\begin{smallmatrix} 0 & 1\end{smallmatrix}\bigr)\) just selects the elements c and d from the result matrix \(\bigl(\begin{smallmatrix} a & b \\ c & d\end{smallmatrix}\bigr)\), so it can be omitted: once we know the matrix, we know the result.
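
For instance, with n = 3: \(\bigl(\begin{smallmatrix} 0 & 1 \\ 1 & 1\end{smallmatrix}\bigr)^3 = \bigl(\begin{smallmatrix} 1 & 2 \\ 2 & 3\end{smallmatrix}\bigr)\), and \(\bigl(\begin{smallmatrix} 0 & 1\end{smallmatrix}\bigr) \times \bigl(\begin{smallmatrix} 1 & 2 \\ 2 & 3\end{smallmatrix}\bigr) = \bigl(\begin{smallmatrix} 2 & 3\end{smallmatrix}\bigr)\), which is indeed \((F_3, F_4) = (2, 3)\).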

Another useful fact is that in the result matrix \(\bigl(\begin{smallmatrix} a & b \\ c & d\end{smallmatrix}\bigr)\) we have b = c (powers of a symmetric matrix are symmetric) and d = a + b. So the matrix can be written as \(\bigl(\begin{smallmatrix} a & b \\ b & a + b\end{smallmatrix}\bigr)\).

Also note that, by direct matrix multiplication, the following formula holds:

\(\bigl(\begin{smallmatrix} a & b \\ b & a + b\end{smallmatrix}\bigr)^2 = \bigl(\begin{smallmatrix} a' & b' \\ b' & a' + b'\end{smallmatrix}\bigr)\) where \(a' = a^2 + b^2\) and \(b' = 2ab + b^2\).
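
Carrying out the multiplication entry by entry confirms this: \(\bigl(\begin{smallmatrix} a & b \\ b & a + b\end{smallmatrix}\bigr)^2 = \bigl(\begin{smallmatrix} a^2 + b^2 & ab + b(a + b) \\ ba + (a + b)b & b^2 + (a + b)^2\end{smallmatrix}\bigr)\), and indeed \(ab + b(a + b) = 2ab + b^2 = b'\) while \(b^2 + (a + b)^2 = (a^2 + b^2) + (2ab + b^2) = a' + b'\).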

From the mathematical facts above, the implementation comes naturally.

(define (fibonacci-iter n)
  ;; Returns a pair (a . b) representing the matrix
  ;; ((0 1) (1 1))^n = ((a b) (b a+b)), i.e. a = F(n-1), b = F(n); assumes n >= 1.
  (cond [(= n 1) (cons 0 1)]
        [(even? n) (let* ([res (fibonacci-iter (/ n 2))]
                          [a (car res)]
                          [b (cdr res)])
                     ;; squaring step: a' = a^2 + b^2, b' = 2ab + b^2
                     (cons (+ (* a a) (* b b))
                           (+ (* a b) (* a b) (* b b))))]
        [else (let* ([res (fibonacci-iter (- n 1))]
                     [a (car res)]
                     [b (cdr res)])
                ;; one extra multiplication by the base matrix: a' = b, b' = a + b
                (cons b (+ a b)))]))

(define (fibonacci n)
  (cdr (fibonacci-iter n)))

The version above works, but we can do even better by making the recursion tail-recursive.

(define (fibonacci-iter a b p q n)
  ;; (a . b) encodes the matrix that is repeatedly squared,
  ;; (p . q) accumulates the product so far; see the formulas below.
  (cond [(= n 0) p]
        [(even? n) (fibonacci-iter (+ (* a a) (* b b))          ; square (a, b)
                                   (+ (* a b) (* a b) (* b b))
                                   p
                                   q
                                   (/ n 2))]
        [else (fibonacci-iter a
                              b
                              (+ (* a p) (* b q))               ; fold (a, b) into (p, q)
                              (+ (* a q) (* b q) (* b p))
                              (- n 1))]))

(define (fibonacci n)
  (fibonacci-iter 0 1 0 1 n))

The improved version utilizes another mathematical fact:

\(\bigl(\begin{smallmatrix} a & b \\ b & a + b\end{smallmatrix}\bigr) \times \bigl(\begin{smallmatrix} p & q \\ q & p + q\end{smallmatrix}\bigr) = \bigl(\begin{smallmatrix} a' & b' \\ b' & a' + b'\end{smallmatrix}\bigr)\) where \(a' = ap + bq\) and \(b' = aq + bq + bp\).
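
Again, multiplying entry by entry: \(\bigl(\begin{smallmatrix} a & b \\ b & a + b\end{smallmatrix}\bigr) \times \bigl(\begin{smallmatrix} p & q \\ q & p + q\end{smallmatrix}\bigr) = \bigl(\begin{smallmatrix} ap + bq & aq + b(p + q) \\ bp + (a + b)q & bq + (a + b)(p + q)\end{smallmatrix}\bigr)\); the top row gives \(a' = ap + bq\) and \(b' = aq + bq + bp\), and the bottom row again works out to \((b', a' + b')\).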

It’s also helpful to know that for any non-negative integer n and matrix X:

\(X^n = X^{b_0} \times (X^2)^{b_1} \times (X^4)^{b_2} \times (X^8)^{b_3} \cdots \times (X^{2^t})^{b_t}\) where \(b_i\) is either 0 or 1.

The formula above means that \(b_t \cdots b_2 b_1 b_0\) is the binary representation of n. The improved version uses p and q to accumulate the product of the terms processed so far, while a and b remember \(X^{2^i}\). If the current n is even, the bit \(b_i\) is 0: i is increased (by squaring a and b and halving n) and the accumulator stays unchanged. Otherwise \(b_i\) is 1: the accumulator p and q is updated by multiplying in the current a and b, and i will be increased in the next recursive call, because subtracting 1 from n has cleared the lowest bit \(b_i\). The process stops when n reaches zero, which means every remaining bit \(b_i\) is zero.
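
For example, \(13 = 1101_2\), so \(X^{13} = X \times (X^4) \times (X^8)\): the bits \(b_0 = 1\), \(b_1 = 0\), \(b_2 = 1\), \(b_3 = 1\) tell us which of the repeatedly squared powers get multiplied into the accumulator.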

The complexity of this fast method is \(\Theta(\log n)\), compared to \(\Theta(n)\) for the second approach above.
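
As a quick sanity check of the fast version, we can compare it against a straightforward iterative definition for small inputs (fibonacci-ref below is a throwaway helper written only for this check):

;; simple iterative reference implementation for checking small values
(define (fibonacci-ref n)
  (let loop ([a 0] [b 1] [k n])
    (if (= k 0) a (loop b (+ a b) (- k 1)))))

(for ([n (in-range 1 20)])
  (unless (= (fibonacci n) (fibonacci-ref n))
    (error "mismatch at" n)))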

Reference

  • SICP, sections 1.2 and 3.3