Ackermann function

It is well known that Ackermann’s function is a pitfall for naive recursion.

The Haskell code that first comes to mind is not efficient, even for small parameter values:

```
acker :: Integer -> Integer -> Integer
acker 0 n = n + 1
acker m 0 = acker (m - 1) 1
acker m n = acker (m - 1) (acker m (n - 1))
```
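
For illustration, even quite small arguments are already slow with this definition; the values below follow the standard recurrence:

```
ghci> acker 2 3
9
ghci> acker 3 3
61
ghci> acker 3 8   -- already millions of recursive calls
2045
```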

Why is the solution proposed here more efficient?

```
acks :: [[Int]]
acks = [ [ case (m, n) of
             (0, _) -> n + 1
             (_, 0) -> acks !! (m - 1) !! 1
             (_, _) -> acks !! (m - 1) !! (acks !! m !! (n - 1))
         | n <- [0 ..] ]
       | m <- [0 ..] ]
```
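
For example, with a small wrapper (the name `ack` is just for convenience):

```
ack :: Int -> Int -> Int
ack m n = acks !! m !! n
```

`ack 3 3` answers 61 essentially instantly, where the first version already needs thousands of calls for the same input.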

Thank you in advance for any explanations.

FYI, you can format code with triple backticks:

```
some
code
here
```

or with 4 space indent:

    some
    code
    here

As for why the second version is so much faster: it uses a technique called dynamic programming, here in the form of memoization. Dynamic programming means storing the result of the function for each input it is computed on, so the algorithm never has to recompute a result that has already been calculated.
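
A minimal sketch of the same idea on a smaller function, Fibonacci (my own example, where the effect is easy to see): the naive recursion does an exponential number of calls, while indexing into a lazy list of results computes each value at most once.

```
-- Naive version: O(2^n) calls.
fibSlow :: Int -> Integer
fibSlow n
  | n < 2     = fromIntegral n
  | otherwise = fibSlow (n - 1) + fibSlow (n - 2)

-- Memoized version: each fibs !! k is computed at most once,
-- so fibMemo n needs O(n) additions (plus list-indexing overhead).
fibs :: [Integer]
fibs = [ if n < 2 then fromIntegral n
                  else fibs !! (n - 1) + fibs !! (n - 2)
       | n <- [0 ..] ]

fibMemo :: Int -> Integer
fibMemo = (fibs !!)
```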

In the case of the Ackermann function this means you only need O(m * n) operations to calculate ackermann(m, n), which is much faster than the first approach.

The way dynamic programming is implemented in that example code is with lazy lists. In Haskell every value is immutable, so once, for example, `acks !! (m - 1) !! 1` has been computed, its result value remains in the list. When that same location is queried again, it does not have to be recomputed.
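
You can watch this happen with Debug.Trace; a small sketch (the list `cached` is made up purely for the demonstration):

```
import Debug.Trace (trace)

cached :: [Int]
cached = [ trace ("computing " ++ show n) (n * n) | n <- [0 ..] ]

main :: IO ()
main = do
  print (cached !! 5)  -- forces the thunk: prints "computing 5", then 25
  print (cached !! 5)  -- same list cell, already evaluated: prints only 25
```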

This way of implementing dynamic programming probably takes somewhat more than O(m * n) time, because list indexing with `!!` is linear. You could use an m-by-n array to store the values instead, making each lookup O(1) and the whole thing even faster.
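
A possible sketch of that array variant using Data.Array. The bounds mMax and nMax are my assumption: the recursion can index a row far beyond the original n, so positions outside the table fall back to plain recursion here.

```
import Data.Array

ackArr :: Int -> Int -> Int -> Int -> Int
ackArr mMax nMax = go
  where
    bnds = ((0, 0), (mMax, nMax))

    -- Array elements are lazy thunks; each is evaluated at most once.
    table :: Array (Int, Int) Int
    table = listArray bnds [ compute m n | (m, n) <- range bnds ]

    -- O(1) lookup when the index is in the table, direct recursion otherwise.
    go m n
      | inRange bnds (m, n) = table ! (m, n)
      | otherwise           = compute m n

    compute 0 n = n + 1
    compute m 0 = go (m - 1) 1
    compute m n = go (m - 1) (go m (n - 1))
```

For instance, ackArr 3 100000 3 10 returns 8189 quickly, since every lookup is constant time instead of a linear `!!` traversal.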

But the Ackermann function was invented to demonstrate the existence of extremely fast-growing computable functions (it is not primitive recursive), so speeding it up with dynamic programming somewhat defeats its purpose.


I hadn’t thought of that. Your explanation is very clear. Many thanks to you.