Pairs ordered by largest sum

That’s a nice observation. I wonder if my solution can be improved. Your pairsSortedBySum is nice, but it is not lazy (even if I remove the sorting):

ghci> pairsSortedBySum (5:4:3:2:1:undefined)
*** Exception: Prelude.undefined

Whereas my solution only needs the first two numbers to produce a result:

ghci> f (5:4:undefined)
[(5,4)*** Exception: Prelude.undefined

Also, @emiruz your neophyte-friendly version uses a merge function that is too strict. It should be lazy in the second argument, i.e.:

 merge (x:undefined) undefined = x : undefined

My apologies – neophyte here after all – would you mind explaining what the above means? What would the appropriately lazy version look like? Someone somewhere will happen upon this code in the future; I want to leave it correct!

I’d be grateful if you could hint at the utility of the solution, given the problem statement, if it’s both equivalent in time complexity to a naive sort and not lazy. I ask genuinely, as it isn’t obvious to me.

The equation I wrote means that the merge function should be able to produce one element of the result even if it only looks at the first element of the first argument. If you run it in GHCi, you should see this:

ghci> merge (5:undefined) undefined
[5*** Exception: Prelude.undefined

It fails with an exception, but it does first produce the 5 of the result.

For more info I’d recommend @HeinrichApfelmus’ excellent guide: apfelmus - The Incomplete Guide to Lazy Evaluation (in Haskell). Especially the last part shows how you can specify the laziness behavior of functions using equations like I wrote.

The lazy version would look like this:

  -- assumes the first element of the first list has a sum at least as large as that of the first element of `ys`
  merge [] ys = ys
  merge (x:xs) ys = x : go xs ys where
    go (x@(a,b):xs) (y@(c,d):ys)
      | a + b >= c + d = x : go xs (y:ys)
      | otherwise      = y : go (x:xs) ys
    go [] ys = ys
    go xs [] = xs

The reason that laziness is important in this case is that f is recursive so we end up with a nested call pattern like this:

merge xs1 (merge xs2 (merge xs3 ...))

If merge is lazy enough then this can immediately start producing results even if the nesting is arbitrarily deep.
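For concreteness, here is a minimal sketch of a definition with that right-nested shape (the original f isn’t quoted in full in this thread, so treat the exact wiring as an assumption based on the treeFold version further down): folding the per-head pair lists with foldr merge [] produces exactly the nesting above, and the lazy merge lets the head of the result appear after inspecting only the first inner list.

{-# LANGUAGE TupleSections #-}

import Data.List (tails)

-- Hypothetical reconstruction, reusing the lazy merge from above.
-- foldr builds merge xs1 (merge xs2 (merge xs3 ...)).
f :: [Int] -> [(Int,Int)]
f xs = foldr merge [] [map (x,) ys | x:ys <- tails xs]
  where
    merge [] ys = ys
    -- assumes the head of the first list has the largest sum
    merge (x:xs) ys = x : go xs ys
      where
        go (x@(a,b):xs) (y@(c,d):ys)
          | a + b >= c + d = x : go xs (y:ys)
          | otherwise      = y : go (x:xs) ys
        go [] ys = ys
        go xs [] = xs

This matches the behaviour shown earlier:

ghci> f (5:4:undefined)
[(5,4)*** Exception: Prelude.undefined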

One practical advantage of laziness is that it is faster if you only need the first few elements of the result. However, the strict version is faster if you know you’ll need the whole result. It’s a tradeoff, although it would be nice if there were a solution that is both lazy and fast, and currently that doesn’t seem impossible to me.


Thanks, I need to stare at that for a while to understand why it’s “more lazy”. It feels to me that understanding how to use “laziness” is a large part of benefiting from Haskell, so I see this as something I should spend time understanding properly.


Isn’t this a surprisingly interesting problem? I’m keen on problems that are straightforward to state yet intricate to solve. If “we” do come up with something decent here, I suspect it is likely to have other useful purposes.

I think one solution to get a lazy and efficient implementation is to use a balanced tree fold as described in: Balancing Folds - Donnacha Oisín Kidney

The idea is to switch from this nesting structure:

merge xs1 (merge xs2 (merge xs3 xs4))

To a balanced binary tree structure:

merge (merge xs1 xs2) (merge xs3 xs4)

My reasoning about the performance is that each element of the resulting list has to propagate from the leaves of the structure to the root, and each time it moves one level up it costs 1 unit of time. We can therefore minimize the time by minimizing the distance between the leaves and the root, which is what a balanced tree structure does. That makes each element of the result cost only log n units of time, and since there are about n^2/2 elements in total we should get the optimal O(n^2 log n) asymptotic complexity.

Let me try it out.

Edit: It seems to work, here’s my new version (using the treefold package, which provides folds that try to combine elements in a balanced way, by the same author as the blog post):

{-# LANGUAGE TupleSections #-}

import Data.Function
import Data.TreeFold
import Data.List

-- assumes the input is sorted from large to small
-- e.g.
-- >>> f [5,4,3,2,1]
-- [(5,4),(5,3),(5,2),(4,3),(5,1),(4,2),(4,1),(3,2),(3,1),(2,1)]
f :: [Int] -> [(Int,Int)]
f xs = treeFold merge [] $ map (\(x:xs) -> map (x,) xs) $ init $ tails xs where
  merge [] ys = ys
  -- assumes `x` has a bigger sum than the first element of `ys`
  merge (x:xs) ys = x : go xs ys where
    go (x@(a,b):xs) (y@(c,d):ys)
      | a + b >= c + d = x : go xs (y:ys)
      | otherwise      = y : go (x:xs) ys
    go [] ys = ys
    go xs [] = xs

It has better memory usage in GHCi:

Old:

ghci> length $ f $ reverse [1..100]
4950
(0.04 secs, 46,661,584 bytes)

New:

ghci> length $ f $ reverse [1..100]
4950
(0.02 secs, 16,600,472 bytes)

And benchmarks (with tasty-bench, a featherlight benchmark framework):

  f old: OK
    721  μs ±  11 μs, 6.7 MB allocated, 313 KB copied, 7.0 MB peak memory
  f new: OK
    418  μs ±  27 μs, 3.3 MB allocated,  98 KB copied, 7.0 MB peak memory

That’s a nice speedup, though less than 2x.

And kudos to @Bodigrim, since his tasty-bench-fit package (which determines the time complexity of a given function) predicts the complexity I expected:

old: 7.04e-10 * x ^ 3
new: 7.98e-9 * x ^ 2 * log x

But sadly it is not lazy any more:

ghci> f (5:4:undefined)
*** Exception: Prelude.undefined

I guess I need to write my own lazy tree fold.


Yay, we’ve rewritten naive sort again :slight_smile: I joke, but I appreciate the journey.

I think the laziness issue is due to the fact that treeFold constructs an almost perfectly balanced tree. But I think you can write a treeFold that uses a skew tree structure and is still lazy enough. Kind of like Okasaki’s famous random access list (I couldn’t quickly find a more accessible source), which is essentially a list of balanced trees that get deeper the further you go into the list.

This is very useful to know about.

For what it’s worth, I think the imperative version of this algorithm is fairly simple. Say the list is [5,4,3,2,1]. Let’s create some stacks:

5,4
5,3  4,3
5,2  4,2  3,2
5,1  4,1  3,1  2,1

We take from the first stack until its head is <= the head of the second stack. We then take from the second stack until its head is <= the head of the first or third stack. In general, we take from the nth stack until its head is <= the head of one of stacks 1 to n+1, and so on until the end. I think that should be possible in optimal time as an imperative algorithm. Meanwhile, you could imagine the stacks to be something like Python generators, so everything is still “lazy”.
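For illustration, here is a rough Haskell sketch of that idea (pairsByStacks and the exact wiring are my own hypothetical rendering of the description above, not tested code): build one stack per row of the picture and repeatedly pop from whichever stack currently has the head with the largest sum.

{-# LANGUAGE TupleSections #-}

import Data.List (inits, maximumBy)
import Data.Ord (comparing)

-- Sketch of the stack formulation: one stack per row (grouped by the smaller
-- element of the pair), always popping from the stack whose head currently
-- has the largest sum. This direct version scans every non-empty stack for
-- each output element, may break ties between equal sums in a different
-- order than the merge-based version, and is not tuned for laziness in the
-- input list.
pairsByStacks :: [Int] -> [(Int,Int)]
pairsByStacks xs =
  go [map (,y) prefix | (prefix, y) <- zip (inits xs) xs, not (null prefix)]
  where
    go stacks = case [(j, s) | (j, s) <- zip [0 ..] stacks, not (null s)] of
      []   -> []
      live ->
        let (i, s) = maximumBy (comparing (\(_, (a, b) : _) -> a + b)) live
            (before, _ : after) = splitAt i stacks
        in head s : go (before ++ tail s : after)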

I think this will also cause O(n^3) complexity, as you need to consider O(n) stacks for each element.

Say the element is i; I think you need to consider stacks 1 to i+1, so presumably that would be O(log(n)), no?

On the matter of sorting: Note that the quicksort recursion scheme together with lazy evaluation can give you the minimum of a list in O(n) (not O(1), sorry) time. See also:
https://apfelmus.nfshost.com/articles/quicksearch.html
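To illustrate (a textbook example, not code taken from the linked article): with the usual list formulation of quicksort, laziness means head (qsort xs) only forces the partitioning work along the path to the smallest element, so it does roughly linear work on typical inputs instead of paying for a full sort.

-- Textbook lazy list "quicksort"; head (qsort xs) only evaluates the
-- recursive calls needed to reach the minimum, not the whole sort.
qsort :: Ord a => [a] -> [a]
qsort []       = []
qsort (p : xs) = qsort smaller ++ [p] ++ qsort larger
  where
    smaller = [x | x <- xs, x < p]
    larger  = [x | x <- xs, x >= p]

ghci> head (qsort [3,1,4,1,5,9,2,6])
1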


No, then you have (n * (n - 1)) / 2 “stack considerations” in total (the triangular number formula), which is O(n^2).


I assume you meant O(n)?


This works in O(n^2 log n) time and is lazy:

{-# LANGUAGE TupleSections #-}

import Data.Function
import Data.List

mapPairs :: (a -> a -> a) -> [a] -> [a]
mapPairs f (x:y:zs) = f x y : mapPairs f zs
mapPairs _ xs = xs -- a leftover unpaired element is passed through as-is

treeFold :: (a -> a -> a) -> a -> [a] -> a
treeFold k z [] = z
treeFold k z (x:xs) = k x (treeFold k z (mapPairs k xs)) 

-- assumes the input is sorted from large to small
-- e.g.
-- >>> f [5,4,3,2,1]
-- [(5,4),(5,3),(5,2),(4,3),(5,1),(4,2),(4,1),(3,2),(3,1),(2,1)]
f :: [Int] -> [(Int,Int)]
f xs = treeFold merge [] $ map (\(x:xs) -> map (x,) xs) $ init $ tails xs where
  merge [] ys = ys
  -- assumes `x` has a bigger sum than the first element of `ys`
  merge (x:xs) ys = x : go xs ys where
    go (x@(a,b):xs) (y@(c,d):ys)
      | a + b >= c + d = x : go xs (y:ys)
      | otherwise      = y : go (x:xs) ys
    go [] ys = ys
    go xs [] = xs

Your pairsSortedBySum is nice, but it is not lazy (even if I remove the sorting):

This is due to the heap being a strict structure. With a sufficiently lazy heap, for instance a pairing heap, it is both lazy and O(n^2 log n). Sadly it becomes amortized O(log n) per element with a pairing heap, and I’m not aware of a lazy enough heap with guaranteed O(log n) pop, though it may be out there.

{-# LANGUAGE TupleSections #-}

import Data.List (tails, unfoldr)

data Pairing a = Pairing a [Pairing a]
type EPairing a = Maybe (Pairing a)  -- Possibly empty

fromDescList :: [a] -> EPairing a
fromDescList = foldr (\ys acc -> Just $ Pairing ys $ maybe [] (:[]) acc) Nothing

-- Below are standard definitions from Wikipedia

maxView :: Ord a => EPairing a -> Maybe (a, EPairing a)
maxView = fmap $ \(Pairing x ps) -> (x, mergePairs ps)

insert :: Ord a => a -> EPairing a -> EPairing a
insert x = meld (Just (Pairing x []))

mergePairs :: Ord a => [] (Pairing a) -> EPairing a
mergePairs []       = Nothing
mergePairs [p]      = Just p
mergePairs (p:q:ps) = meld (meld (Just p) (Just q)) (mergePairs ps)

meld :: Ord a => EPairing a -> EPairing a -> EPairing a
meld Nothing p2 = p2
meld p1 Nothing = p1
meld (Just p1@(Pairing xs1 ps1)) (Just p2@(Pairing xs2 ps2))
  | xs1 > xs2 = Just (Pairing xs1 (p2 : ps1))
  | otherwise = Just (Pairing xs2 (p1 : ps2))

And the actual code:


newtype DescList2 a = DescList2 { unDescList2 :: [(a,a)] } deriving Show

instance (Eq a, Num a) => Eq (DescList2 a) where
  a == b = error "_(:3」∠)_"

instance (Ord a, Num a) => Ord (DescList2 a) where
  compare (DescList2 a) (DescList2 b) = case (a,b) of
    ([]       ,[]       ) -> EQ
    (_        ,[]       ) -> GT
    ([]       ,_        ) -> LT
    ((x1,x2):_,(y1,y2):_) -> compare (x1+x2) (y1+y2)

pairsSortedBySumLazy :: (Num a, Ord a) => [a] -> [(a,a)]
pairsSortedBySumLazy =
  unfoldr f .
  fromDescList .
  descLists
  where
    descLists xs = [DescList2 (fmap (x,) ys) | x:ys <- tails xs]
    f pq = case maxView pq of
      Nothing -> Nothing
      Just (DescList2 xs, pq') -> case xs of
        []    -> Nothing
        x:xs' -> Just (x, insert (DescList2 xs') pq')

λ> pairsSortedBySumLazy ([5,4,3,2,1]++undefined)
[(5,4),(5,3),(4,3),(5,2),(4,2),(5,1)*** Exception: Prelude.undefined
λ> jaror ([5,4,3,2,1] ++ undefined) -- list merging
[(5,4),(5,3),(5,2),(4,3),(5,1)*** Exception: Prelude.undefined

pairsSortedBySumLazy might even be optimally lazy, though I haven’t tried verifying it.


I wasn’t aware of the treefold library; it looks interesting, and I will try to see later what it’s about and how your new solution works.


This whole blog is pretty amazing. A lot of bite-sized but powerful techniques, described in an easy-to-understand language. Thanks @oisdk :slight_smile: