## Wednesday, December 12, 2007

A friend of mine, Peter (whom I seem to manage to constantly piss off), looked at it on another level, saying you could do a _MILLION_ times better using a direct formula in C or Java: the Binet formula.
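For reference, the Binet formula computed with doubles might look like this in Java (a sketch; the class and method names are mine):

```java
public class FibBinet {
    // Binet's formula: fib(n) = (phi^n - psi^n) / sqrt(5),
    // where phi = (1 + sqrt(5)) / 2 is the golden ratio.
    // The psi^n term has absolute value < 0.5 for n >= 0,
    // so rounding to the nearest long absorbs it.
    static long fibBinet(int n) {
        double sqrt5 = Math.sqrt(5.0);
        double phi = (1.0 + sqrt5) / 2.0;
        return Math.round(Math.pow(phi, n) / sqrt5);
    }

    public static void main(String[] args) {
        System.out.println(fibBinet(44)); // 701408733
    }
}
```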

I decided to try it, since the scale of the improvement seemed a bit surprising. I first compared a recursive Fibonacci in Java with one in Haskell. Here are the results for Haskell GHC 6.6.1 vs Java 1.6.0 on Linux for fib(44):

Then I decided to time fib(44), or any fib at all, with the Binet formula; I was unable to measure it precisely enough, since a single call always came out as 0ms, whether in Haskell or in Java. Looping 10 million times, Java gave 7.3s and Haskell something similar (though my method of looping 10 million times in Haskell is probably very bad).
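Incidentally, the naive doubly recursive definition being compared above is just this (Java sketch, naming mine):

```java
public class FibNaive {
    // Naive doubly recursive Fibonacci: the call tree grows
    // roughly as phi^n, which is why fib(44) takes many seconds.
    static long fib(int n) {
        if (n < 2) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(30)); // 832040
    }
}
```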

The original post actually points to a link that describes various algorithms for Fibonacci. They basically say that for large n the rounding is not precise enough, and they also propose O(log(n)) algorithms. I tried those and was really impressed by their performance. Again, I could not measure the difference between them and the Binet formula for a single calculation, as the elapsed time always came out as 0. In Java with doubles, the Binet formula already becomes inexact at n=71.
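One of the standard O(log(n)) approaches is "fast doubling", built on the identities fib(2k) = fib(k)·(2·fib(k+1) − fib(k)) and fib(2k+1) = fib(k)² + fib(k+1)². A sketch in Java, using BigInteger so it stays exact well past n=71 (naming is mine):

```java
import java.math.BigInteger;

public class FibFast {
    // O(log n) "fast doubling" Fibonacci with exact BigInteger
    // arithmetic, so it stays correct where doubles break down.
    static BigInteger fib(int n) {
        BigInteger a = BigInteger.ZERO; // fib(k)
        BigInteger b = BigInteger.ONE;  // fib(k+1)
        // Walk the bits of n from most significant to least.
        for (int i = 31 - Integer.numberOfLeadingZeros(n); i >= 0; i--) {
            // fib(2k)   = fib(k) * (2*fib(k+1) - fib(k))
            BigInteger c = a.multiply(b.shiftLeft(1).subtract(a));
            // fib(2k+1) = fib(k)^2 + fib(k+1)^2
            BigInteger d = a.multiply(a).add(b.multiply(b));
            if (((n >> i) & 1) == 0) { a = c; b = d; }
            else { a = d; b = c.add(d); }
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(fib(44)); // 701408733
    }
}
```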

Of course the original post is still quite interesting: it shows how easy it can be to parallelize calculations in Haskell. But the example is silly, as another algorithm can deliver 10 million times the performance. Still, Haskell performs well when compared to Java, whether with the shit algorithm or the good one.

1. The problem with using Binet's formula is that making it exact by extending the precision of the numbers involved as n grows would basically kill the performance. You'd end up doing O(log(n)) multiplications, just like the fast integer versions, but likely with much larger constant factors.

Of course, as n grows, the cost of each multiplication ends up mattering a great deal as well, since the numbers involved become very large (and, in the case of Binet's formula, require very high precision).
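The n=71 breakdown mentioned above can be checked concretely by comparing the double-based formula against an exact iterative computation (a sketch; names are mine):

```java
public class BinetPrecision {
    // Simple iterative Fibonacci: exact in a long up to n = 92.
    static long fibExact(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) { long t = a + b; a = b; b = t; }
        return a;
    }

    // Binet's formula with doubles, as in the comparison above.
    static long fibBinet(int n) {
        double sqrt5 = Math.sqrt(5.0);
        return Math.round(Math.pow((1.0 + sqrt5) / 2.0, n) / sqrt5);
    }

    public static void main(String[] args) {
        // Around n = 71 the double rounding error exceeds 1/2,
        // so the two results start to disagree (as the post observes).
        System.out.println(fibExact(71)); // 308061521170129
        System.out.println(fibBinet(71));
    }
}
```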

2. I think the point of the original post was to find some algorithm - any algorithm - that takes a lot of time. Then to show that relatively easy non-invasive changes can parallelize it and speed it up.

So I think it's beside the point to try to "optimize" the algorithm by completely rewriting it. The only reason the algorithm was chosen at all is *because* it requires a lot of computation.

3. If Haskell has memoization then fib is exactly the example you would need to choose to show off that feature to the detriment of other languages. The example proves nothing about the speed of Haskell.

4. Haskell doesn't have automatic memoisation of functions (nor should it). If you want to memoise something, you'll have to stick it in a constant explicitly.
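For comparison, explicit memoisation of fib might look like this in Java, with the cache as an ordinary constant you fill in yourself (a sketch, naming mine; the usual Haskell trick is similarly to stick a lazy list of results in a top-level constant):

```java
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

public class FibMemo {
    // The cache is an ordinary constant we populate ourselves;
    // nothing here is automatic.
    private static final Map<Integer, BigInteger> cache = new HashMap<>();

    static BigInteger fib(int n) {
        if (n < 2) return BigInteger.valueOf(n);
        BigInteger hit = cache.get(n);
        if (hit != null) return hit;
        // Plain get/put rather than computeIfAbsent, because a
        // recursive computeIfAbsent on a HashMap is not allowed.
        BigInteger result = fib(n - 1).add(fib(n - 2));
        cache.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // 12586269025
    }
}
```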