It does! I followed the tutorial to implement it:
```haskell
foldrFromFor ::
  (forall b f. Applicative f => [b] -> (b -> f ()) -> f ()) ->
  forall a b.
  (b -> a -> a) ->
  a ->
  [b] ->
  a
foldrFromFor for_ f z bs =
  runEndoApplicative z $ for_ bs $ \b -> mkEndoApplicative (f b)
```
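The helpers `runEndoApplicative` and `mkEndoApplicative` aren't shown in this excerpt; here is a minimal sketch of what I assume they look like, judging from how they're used above and in `foldrFromForEndo` below:

```haskell
import Data.Functor.Const (Const (..))
import Data.Monoid (Endo (..))

-- Assumed definitions (not from the thread): pack an a -> a into the
-- Const (Endo a) applicative, and unpack it again by applying it to a seed.
mkEndoApplicative :: (a -> a) -> Const (Endo a) ()
mkEndoApplicative = Const . Endo

runEndoApplicative :: a -> Const (Endo a) () -> a
runEndoApplicative z = ($ z) . appEndo . getConst
```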
Note that this definition does not use any `f` other than `Const (Endo a)`. We can relax it to
```haskell
foldrFromForEndo ::
  (forall a b. [b] -> (b -> Const (Endo a) ()) -> Const (Endo a) ()) ->
  forall a b.
  (b -> a -> a) ->
  a ->
  [b] ->
  a
foldrFromForEndo for_ f z bs =
  ($ z) $ appEndo $ getConst $ for_ bs $ \b -> Const (Endo (f b))
```
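As a quick sanity check of my own (not from the tutorial), instantiating the argument with `for_` from `Data.Foldable` recovers an ordinary right fold:

```haskell
import Data.Foldable (for_)

-- My own check: for_ specialised to lists and Const (Endo a)
-- fits the argument type of foldrFromForEndo.
rebuilt :: [Int]
rebuilt = foldrFromForEndo for_ (:) [] [1, 2, 3]  -- == [1,2,3]
```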
Now compare it to the `foldl` definition:
```haskell
foldlFromForState ::
  (forall a b. [b] -> (b -> State a ()) -> State a ()) ->
  forall a b.
  (a -> b -> a) ->
  a ->
  [b] ->
  a
```
What exactly is the difference between `Const (Endo a) ()` and `State a ()`? Both are essentially `a -> a`; the only mismatch is the order of composition. `Const (Endo a) ()` is a reverse `State` monad. We can define `foldl` in the same style as `foldr`:
```haskell
foldlFromForEndo ::
  (forall a b. [b] -> (b -> Const (Dual (Endo a)) ()) -> Const (Dual (Endo a)) ()) ->
  forall a b.
  (a -> b -> a) ->
  a ->
  [b] ->
  a
foldlFromForEndo for_ f z bs =
  ($ z) $ appEndo $ getDual $ getConst $ for_ bs $ \b -> Const (Dual (Endo (flip f b)))
```
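Again as a check of my own, passing `for_` together with a non-commutative operator shows that the elements really are folded from the left:

```haskell
-- Assumes for_ from Data.Foldable, as in the earlier check.
leftFold :: Int
leftFold = foldlFromForEndo for_ (-) 100 [1, 2, 3]  -- ((100 - 1) - 2) - 3 == 94
```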
Alternatively one can require `for_ :: forall b f. Applicative f => [b] -> (b -> f ()) -> f ()` here.
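For concreteness, my transcription of that suggestion (the body is the same as `foldlFromForEndo`; only the argument type is generalised):

```haskell
foldlFromFor ::
  (forall b f. Applicative f => [b] -> (b -> f ()) -> f ()) ->
  forall a b.
  (a -> b -> a) ->
  a ->
  [b] ->
  a
foldlFromFor for_ f z bs =
  -- The caller's for_ is used at the applicative Const (Dual (Endo a)),
  -- which is an Applicative because Dual (Endo a) is a Monoid.
  ($ z) $ appEndo $ getDual $ getConst $ for_ bs $ \b -> Const (Dual (Endo (flip f b)))
```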
Yes, that’s right. This is explored a little in the “References and commentary: Monoids” section. Note that `foldlFromForState` can be adjusted to take an `Applicative`-general argument, but `forStateFromFoldl` can’t be adjusted to return an `Applicative`-general result, so `foldl` is not equivalent to traversing with anything.
EDIT: I ended up tweaking the section “References and commentary: Monoids” because there were a couple of bits around `Const (Endo _)` which were a bit dubious.
I found the general rule about when to use `foldr` vs `foldl'` a bit vague:

> According to the Principle of Least Power you should use `foldl'` in preference to `foldr` when you can.
In contrast, here’s a slightly different proposition by @lexi.lambda, in a comment on hasura/graphql-engine Pull Request #2933 (“update custom column names on renaming/dropping columns”) on GitHub (it’s a good read!):
> - When the accumulation function is strict, use `foldl'` to consume the list in constant space, since the whole list is going to have to be traversed anyway.
> - When the accumulation function is lazy in its second argument, use `foldr` to do work incrementally to improve streaming and work-saving.
> - Never use `foldl` or `foldr'`; they’re always worse on lists.
Note that the above only applies to lists. So maybe your rule is more general.
Also interesting wrt foldr optimization: Neil Mitchell's Blog (Haskell etc): foldr under the hood
I’m going to admit this is going to take me a few reads to understand haha.
I had no idea lookup in base was written that way. Why is it this way, with `foldr` CPS, and not the ez-pz:

```haskell
0 !? (x:_) = Just x
_ !? [] = Nothing
n !? (_:xs) = (n-1) !? xs
```
Feel free to ask any questions in the meantime. Maybe I can change some things I wrote to make it more clear.
> I had no idea lookup in base was written that way. Why is it this way
That’s a good question! It’s written that way so that the compiler can apply short cut fusion, a rewrite rule that leads to an optimization.
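For reference, the `foldr`/CPS version looks roughly like this (paraphrased from memory, so the exact code in base may differ; note base’s argument order is list-first); expressing it as a single `foldr` is what lets the fusion rewrite rules fire:

```haskell
(!?) :: [a] -> Int -> Maybe a
xs !? n
  | n < 0     = Nothing
  | otherwise =
      -- The fold's accumulator is a continuation Int -> Maybe a.
      foldr
        (\x r k -> if k == 0 then Just x else r (k - 1))
        (const Nothing)
        xs
        n
```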
Thank you very much. Shortcut fusion is completely new to me so this is good reading.
The main takeaway I’m getting from the blog and wider reading is:
- Use `foldl'` when you absolutely must traverse the whole list, the reason being you are very much interested in updating a state with each value and you want to do it in constant space.
- Use `foldr` when your function is lazy in its second argument so you can return early (see the sketch below).
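For example (my own illustration of the second point), a `foldr`-based `any` stops as soon as the predicate succeeds, because `(||)` is lazy in its second argument:

```haskell
anyR :: (a -> Bool) -> [a] -> Bool
anyR p = foldr (\x rest -> p x || rest) False

-- anyR even ([1, 3, 4] ++ undefined) returns True without ever touching
-- the rest of the list.
```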
The `forStateFromFoldl` function was what was wrecking me.
What I’m reading is:
```haskell
forStateFromFoldl ::
  (forall a b. (a -> b -> a) -> a -> [b] -> a) ->
  -- A function that takes a function, our accumulator (state) `a`, and a list of `b`, and returns a new accumulator.
  -- The inner function takes our accumulator and a value from our list of `b`, and returns a new accumulator.
  -- aka foldl
  forall a b.
  [b] -> -- list of things to iterate through
  (b -> State a ()) ->
  -- A function that produces a new state from b.
  -- State a () is an alias for a function that goes (a -> ((), a)),
  -- so really this is (b -> (a -> ((), a))): once we have made the State,
  -- we can call put to send whatever we have computed to the state.
  State a ()
forStateFromFoldl foldl bs f = do
  z <- get -- Since the return value is a State, we can use get to find the initial state supplied to the function.
  put (foldl g z bs) -- Fold through the bs, changing our initial state to our result via function g, and put the result into the state.
  where
    g a b = execState (f b) a
    -- a is our accumulator; b is our value from the b-list we're working on.
    -- Since f is a (b -> State a ()) function, we must use execState to return
    -- the new state for the foldl function to use with the next b.
```
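To double-check my reading, here's a small usage sketch (the example and names are mine):

```haskell
import Control.Monad.State (execState, modify)
import Data.List (foldl')

-- Summing a list by handing forStateFromFoldl a left fold and a stateful action.
sumViaState :: [Int] -> Int
sumViaState bs = execState (forStateFromFoldl foldl' bs (\b -> modify (+ b))) 0
-- sumViaState [1, 2, 3] == 6
```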
Now that I’ve actually written it out, I think it makes way more sense and is really cool.
If I’ve missed anything, please let me know!
> If you’ve miraculously discovered a case where `foldl` is better then please tell me because I would be astonished to learn about it.
Here’s my attempt: a function like `find`, but one that searches from right to left through the list:
```haskell
findR :: (a -> Bool) -> [a] -> Maybe a
findR f = foldl (\xs x -> if f x then Just x else xs) Nothing
```
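(`findR'`, used in the benchmark below, isn't shown; I assume it's the strict variant, something like this:)

```haskell
import Data.List (foldl')

-- Assumed definition of findR': the same fold, but with foldl'.
findR' :: (a -> Bool) -> [a] -> Maybe a
findR' f = foldl' (\xs x -> if f x then Just x else xs) Nothing
```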
If you use `foldl'` then you might be running the possibly expensive function `f` more than necessary. Although admittedly the strict variant can be faster if `f` is cheap and the list is long:
```haskell
-- (defaultMain, bench and nf come from a benchmarking harness, e.g. tasty-bench)
fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main = defaultMain
  [ bench "findR expensive" $ nf (findR (even . fib)) [20,19..1 :: Int]
  , bench "findR' expensive" $ nf (findR' (even . fib)) [20,19..1 :: Int]
  , bench "findR cheap" $ nf (findR even) [1..10000 :: Int]
  , bench "findR' cheap" $ nf (findR' even) [1..10000 :: Int]
  ]
```
```
findR expensive: OK
  87.6 ns ± 7.7 ns, 679 B allocated, 0 B copied, 6.0 MB peak memory
findR' expensive: OK
  784 ns ± 52 ns, 327 B allocated, 0 B copied, 6.0 MB peak memory
findR cheap: OK
  50.3 μs ± 1.1 μs, 313 KB allocated, 12 KB copied, 7.0 MB peak memory
findR' cheap: OK
  41.3 μs ± 3.4 μs, 78 KB allocated, 2 B copied, 7.0 MB peak memory
```
Thanks. Would you mind adding the following to your benchmarks?
```haskell
findR'' f =
  foldr (flip (\xs x -> if f x then Just x else xs)) Nothing . reverse
```
Oddly enough, slightly better than `findR`:
```
findR'' expensive: OK
  79.5 ns ± 7.8 ns, 519 B allocated, 0 B copied, 6.0 MB peak memory
findR'' cheap: OK
  46.3 μs ± 4.1 μs, 234 KB allocated, 6.7 KB copied, 7.0 MB peak memory
```
But what’s the difference between `foldl` and `\k z -> foldr (flip k) z . reverse`?
Using `reverse` breaks fusion:
```
findR expensive fused: OK
  55.6 ns ± 3.5 ns, 671 B allocated, 0 B copied, 7.0 MB peak memory
findR cheap fused: OK
  24.0 μs ± 1.8 μs, 312 KB allocated, 12 KB copied, 7.0 MB peak memory
findR' expensive fused: OK
  691 ns ± 55 ns, 645 B allocated, 0 B copied, 7.0 MB peak memory
findR' cheap fused: OK
  19.1 μs ± 1.6 μs, 156 KB allocated, 9 B copied, 7.0 MB peak memory
findR'' expensive fused: OK
  238 ns ± 21 ns, 2.0 KB allocated, 0 B copied, 7.0 MB peak memory
findR'' cheap fused: OK
  130 μs ± 6.8 μs, 937 KB allocated, 44 KB copied, 7.0 MB peak memory
```
That’s when I add these benchmarks:
, bench "findR expensive fused" $ nf (\(x,y,z) -> findR (even . fib) (enumFromThenTo x y z)) (20,19,1 :: Int)
, bench "findR cheap fused" $ nf (findR even . uncurry enumFromTo) (1,10000 :: Int)
> what’s the difference between `foldl` and `\k z -> foldr (flip k) z . reverse`?
`foldl` builds a closure of size O(n) before proceeding to evaluate it. `foldr ... . reverse` works in constant space (assuming the list can be garbage collected as it is reversed).
Taken literally my statement is, sadly, obviously false. I would like to find a criterion that distinguishes “genuine” uses of `foldl` from uses that “should be” a `foldr` on the reversed list, but I’m not sure what that criterion should be. My belief is still that “genuine” uses of `foldl` should always be `foldl'` instead.
This benchmark seems to contradict you:
```haskell
{-# NOINLINE xs #-}
xs :: [Int]
xs = [1..10000]

triviall, trivialr :: [a] -> ()
triviall = foldl const ()
trivialr = foldr (flip const) () . reverse
{-# NOINLINE triviall #-}
{-# NOINLINE trivialr #-}

main = defaultMain
  [ bench "trivial foldl" $ whnf triviall xs
  , bench "trivial foldr.reverse" $ whnf trivialr xs
  ]
```
```
trivial foldl: OK
  226 μs ± 17 μs, 390 KB allocated, 19 KB copied, 7.0 MB peak memory
trivial foldr.reverse: OK
  67.3 μs ± 6.2 μs, 234 KB allocated, 6.7 KB copied, 7.0 MB peak memory
```
It does allocate slightly less, but it still allocates.
Oh, `foldr ... . reverse` definitely allocates. It allocates an entire new list! The question is the maximum residency. In principle `foldr ... . reverse` can work in constant extra space; `foldl` always uses O(n) extra space.
Ah, how about this criterion: I don’t know of a use of `foldl` that shouldn’t be either `foldl'` or `foldr ... . reverse`. I’d be interested to know if there are any counterexamples. I can just about believe that there are counterexamples for delicate performance reasons, but beyond that I feel confident that there aren’t.
I’m still not convinced so I did some heap profiling and got these pictures: (nevermind this, I was doing something wrong)
> I’m still not convinced
Convinced about what? If you do an evaluation by hand with pen and paper I think you’ll be convinced about the overall principle. What GHC actually does in practice is another matter, and something I know less about.
> I found the general rule about when to use `foldr` vs `foldl'` a bit vague:
>
> > According to the Principle of Least Power you should use `foldl'` in preference to `foldr` when you can.
@hasufell, yes, perhaps I should have connected the dots a bit more explicitly here. What I mean by “when you can” is “when you want to do what `foldl'` does” according to the article, that is, traverse a list with (only) a state.
Thanks for linking Alexis’s post. I’ll link to it from my article. My problem with her rules of thumb is that they presuppose that you already have an (“accumulation”) function (which is also a misnomer: in the case of `foldr` it doesn’t accumulate). That is, it’s a rule you can use if you start with a function, to determine which fold to use with that function. By contrast, I’m trying to present a rule you can use if you start with a problem, to determine which fold can solve your problem. If your problem is “traverse with (only) a state”-shaped then the answer is `foldl'`. If it’s “traverse with anything else”-shaped then the answer is `foldr` (or in either case you could just use `for_`). Does that explanation help? If so I wonder if I can incorporate it into the article somehow, or perhaps write another article with more examples.
Hi @tobz619, yes, your description of how `forStateFromFoldl` works is correct.
> Use `foldl'` when you absolutely must traverse the whole list, the reason being you are very much interested in updating a state with each value and you want to do it in constant space.
Yes, or you could just say “Use `foldl'` when you’re traversing the list with (only) a state”.
> Use `foldr` when your function is lazy in its second argument …
I’m trying to get away from talking about (accumulator) functions because that presupposes one already has one, and if one does then one is likely to already know whether one needs `foldl` or `foldr`. Instead, I’d say “Use `foldr` when you’re traversing a list in any way other than keeping track of (only) a state”.
That’s a good idea; working through the examples manually sheds more light on it:
```haskell
foldr k z = go where
  go [] = z
  go (x:xs) = k x (foldr k z xs)

reverse l = rev l [] where
  rev [] a = a
  rev (x:xs) a = rev xs (x:a)

const' _ y = y
```
```
foldr const' () (reverse (1:2:3:[]))
go (reverse (1:2:3:[]))
go (rev (1:2:3:[]) [])
go (rev (2:3:[]) (1:[]))
go (rev (3:[]) (2:1:[]))
go (rev [] (3:2:1:[]))
go (3:2:1:[])
const' 3 (go (2:1:[]))
go (2:1:[])
const' 2 (go (1:[]))
go (1:[])
const' 1 (go [])
go []
()
```
```haskell
foldl k z = go z where
  go a [] = a
  go a (x:xs) = go (k a x) xs
```
```
foldl const () (1:2:3:[])
go () (1:2:3:[])
go (const () 1) (2:3:[])
go (const (const () 1) 2) (3:[])
go (const (const (const () 1) 2) 3) []
const (const (const () 1) 2) 3
const (const () 1) 2
const () 1
()
```
Conclusion: both approaches work in constant (additional) space! And both approaches do O(n) allocations (closures or cons cells).
So your statement is technically true, but it seems to imply that `foldl` uses more space than `foldr ... . reverse`, which is not true (assuming cons cells and closures are the same size, which is not quite true in practice).