Hacker News

In the example, parallelization annotations were used and the speedup is not impressive.


It's a dual-core machine. The par annotations get you a factor of two at best.

Dons is still arguably pulling a fast one, though. Even though the Haskell program looks like it should behave in more-or-less the same way as the other two, the semantics are actually quite different. Haskell defaults to lazy evaluation with memoization -- hence the "naive" recursion isn't really naive. That's where the orders of magnitude come from.

That said, Haskell is still a pretty fast language, on par with Java.


It isn't memoizing anything here. If it were memoizing, do you think it would take 0.48 seconds? Laziness is not the same thing as memoization of function arguments.
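To make the distinction concrete, here's a small sketch (the names `shared` and `f` are mine, and it should be compiled without optimization so `Debug.Trace` output is predictable): a named top-level thunk is evaluated at most once and then shared, whereas a function application is re-evaluated on every call -- GHC does not cache results keyed on the argument.

```haskell
import Debug.Trace (trace)

-- A named top-level value is a thunk: it is evaluated at most once,
-- and the result is shared by every later use.
shared :: Int
shared = trace "evaluating shared" (2 + 3)

-- A function application is not cached by argument: each call of
-- f 5 re-runs the body.
f :: Int -> Int
f x = trace ("evaluating f " ++ show x) (x + x)

main :: IO ()
main = do
  print (shared + shared)  -- "evaluating shared" fires once
  print (f 5 + f 5)        -- "evaluating f 5" fires twice (at -O0)
```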


Okay, apparently I didn't quite think this one through. I wasn't claiming that GHC memoizes values, but it does memoize thunks. I thought this algorithm would be getting some benefit from that, but having inserted some trace statements, I was apparently wrong.
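A sketch of that kind of instrumentation (the trace call and helper names are mine): wrap the naive doubly recursive fib in `Debug.Trace.trace` and watch the same small arguments print over and over, which shows that nothing is being cached across calls.

```haskell
import Debug.Trace (trace)

-- Naive doubly recursive fib, instrumented with trace. If GHC were
-- memoizing calls by argument, each "fib n" line would print at most
-- once; instead the small arguments print repeatedly.
fib :: Int -> Integer
fib n = trace ("fib " ++ show n) (go n)
  where
    go 0 = 0
    go 1 = 1
    go m = fib (m - 1) + fib (m - 2)

main :: IO ()
main = print (fib 10)
```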

Here's a version that really is memoized:

 import Control.Parallel (par)
 import Control.Monad (forM_)
 import Text.Printf (printf)

 cutoff :: Int
 cutoff = 35

 -- All Fibonacci numbers, as a top-level lazy list. Each element is a
 -- thunk that is evaluated at most once and then shared: this list is
 -- the memo table.
 fibs :: [Integer]
 fibs = [ fib n | n <- [0..] ]

 fib' :: Int -> Integer
 fib' 0 = 0
 fib' 1 = 1
 fib' n = fibs !! (n-1) + fibs !! (n-2)

 fib :: Int -> Integer
 fib n | n < cutoff = fib' n
       | otherwise  = r `par` (l `seq` l + r)  -- spark r in parallel
  where
     l = fibs !! (n-1)
     r = fibs !! (n-2)

 main :: IO ()
 main = forM_ [0..45] $ \i ->
             printf "n=%d => %d\n" i (fib i)


