Useful applications of partial application

I am interested in finding useful applications of partial application. I’m pretty sure I understand partial application, but what I’d like to be able to do is say to a student (or indeed a teacher; this is at the UK’s sixth-form level): this is one use of it, and these are its advantages.

Is there an efficiency advantage? Given a function f a b, if a is costly to compute, does defining g = f a mean that g b is more efficient?

I can see some benefits in generating specific ‘versions’ of a function from a readability point of view, but I’m interested in general patterns that people use ‘in the wild.’

Many thanks.

John

-- f :: a -> b -> c
cs = map (f a) listOfBs

That is the usual example in the wild. Partial application being cheap makes it extremely comfortable to use with higher-order functions.


Thanks for the quick reply.

I can see your point, but I think I want something at a higher level of ‘abstraction.’ Couldn’t I do that with a lambda? Using your example, is this:

f :: Num a => a -> a -> a
f a b = a + b

map (f 2) [1 .. 10]

any different in essence to

map (\b -> 2 + b) [1 .. 10]

ignoring questions of how you get the value of a if it isn’t a literal? Obviously it’s shorter, though not if I have to define f separately.

I suppose what I’m looking for is a bit (or at least a reasonable amount) of code using partial application that’s ‘real’, and perhaps the author explaining why they did it that way (was there an alternative? does it look neater? is the execution time smaller? …)

Thanks.

John

For a short function, a lambda is definitely fine (lambdas are very cheap in Haskell too!), but imagine you have a longer function:

searchSomething :: [Entries] -> a -> Result
searchSomething = … -- very long definition

then you cannot just paste the definition into a lambda, but:

map (searchSomething myDict) as

is short and readable. Pointfree style is another place where partial application comes more naturally:

filter ((== needle) . searchSomething myDict) as
-- ^-- a partially applied function and a section. Again very readable, very natural, no need
-- for parameters (which could get shadowed, naming vars is one of the trickiest things
-- in programming, right?).

Not having to come up with a fresh variable name is very convenient when writing code and even more so when reading it: less cruft to keep in mind.

I am sure more experienced haskellers will come up with additional benefits.

My favourite example is map (map sin) [[0.1, 0.2], [0.3, 0.4]].

Sure you can use map (\x -> map sin x) [[0.1, 0.2], [0.3, 0.4]].

But there is also value in regarding map sin as an entity in its own right. (And there is value in not.)

The maximum meta-value is in appreciating both perspectives.

If you have some “logic” in your program of the form (b -> c) -> d and some “strategy” function a -> b -> c (think of the a as perhaps some configuration read from a file) then the most natural way to combine the two is by partial application.

A less pleasant alternative would be to change the logic into something like a -> (a -> b -> c) -> d. That is, make the logic accept both the “configuration” and the a -> b -> c strategy function.

That is noticeably uglier, but a more covert variant is having the logic be something like a function a -> d that threads the a value through its implementation, and internally calls a statically known function a -> b -> c at some point.

In such cases, it’s often better to disentangle knowledge of a from the logic and refactor the logic to something like (b -> c) -> d with the help of partial application.

Why? If the logic only carries around the a value to eventually obtain a b -> c value, that a is likely an “implementation detail” of the strategy, and the logic would probably be cleaner if it took the partially applied b -> c function as a parameter. Under this view, partial application is related to dependency injection.
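
To make the shapes concrete, here is a minimal sketch; all the names (Config, Strategy, logic, run) are invented for illustration, not taken from any real code:

-- The "a": some configuration, e.g. read from a file.
type Config = FilePath

-- The "a -> b -> c" strategy function.
type Strategy = Config -> Int -> String

-- The logic only needs the partially applied strategy, i.e. a "b -> c".
logic :: (Int -> String) -> [String]
logic render = map render [1, 2, 3]

-- Partial application combines the two; the logic never sees Config.
run :: Strategy -> Config -> [String]
run strategy cfg = logic (strategy cfg)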

When the functions are effectful/monadic, partial application is even more useful: you can sneak in additional useful effects (tracing, debugging…) by modifying the partially applied “strategy” you pass to the logic, without tweaking the logic itself. When the logic only accepts the “configuration” value and feeds it internally to statically known functions, that opportunity for tweaking behavior does not exist.

It’s the difference between passing a String -> IO () logging function to your logic, and passing a Handle to your logic and making it invoke a Handle -> String -> IO () function internally.
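
As a hedged sketch of those two designs (logicA, logicB and main are made-up names; only hPutStrLn and stderr come from System.IO):

import System.IO (Handle, hPutStrLn, stderr)

-- Design 1: the logic receives a partially applied String -> IO () logger.
logicA :: (String -> IO ()) -> IO ()
logicA logMsg = do
  logMsg "starting"
  logMsg "done"

-- Design 2: the logic receives the Handle and calls hPutStrLn itself.
logicB :: Handle -> IO ()
logicB h = do
  hPutStrLn h "starting"
  hPutStrLn h "done"

main :: IO ()
main = do
  logicA (hPutStrLn stderr)                              -- plain logger, by partial application
  logicA (\msg -> hPutStrLn stderr ("[trace] " ++ msg))  -- extra tracing, logic untouched
  logicB stderr                                          -- logging behaviour fixed inside the logic

With the first design, swapping in the tracing logger needs no change to logicA at all; with the second, any such change has to happen inside logicB.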

In general, if I have a long chain of composed functions, a x . b y . c z . ..., then almost every function in the chain would be partially applied. The “small” readability benefit adds up immensely!
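
For instance (a made-up pipeline, just to illustrate the pattern):

import Data.Char (isAlpha, toUpper)

-- filter isAlpha, map toUpper and take 10 are each partially applied,
-- and compose into a single String -> String.
normalise :: String -> String
normalise = take 10 . map toUpper . filter isAlpha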

Many thanks for this, which is the sort of high-level justification for partial application that I was hoping for. Some of the language is too advanced for my (current) Haskell knowledge so I’ll have to go away and think a bit more about it.

@sclv’s observation about using the composition operator (.) appears elsewhere - here are some more examples from the Haskell98 Report:

break :: (a -> Bool) -> [a] -> ([a],[a])
break p = span (not . p)

any, all :: (a -> Bool) -> [a] -> Bool
any p = or . map p
all p = and . map p

These aren’t defined using (.), but since we’re looking at the Report:

unlines :: [String] -> String
unlines = concatMap (++ "\n")

reverse :: [a] -> [a]
reverse = foldl (flip (:)) []

Now just imagine having to write out:

break p l = span (\x -> (not . p) x) l

any p l =  (or . map p) l
all p l = (and . map p) l

unlines l = concatMap (\l -> l ++ "\n") l

reverse l = foldl (\x y -> flip (:) x y) [] l

Partial applications: once you’re used to them, you really notice their absence in other languages…


I personally think naming parameters is quite a burden. Single character binding sucks. This is why I prefer partial application.


To be honest, I don’t think the non-partially applied versions are much less readable.

Also I’d like to think that even in non-curried languages the (.) operator would be defined with 2 arguments and not with 3 arguments. So you’d still be able to write (I’ll write it with C style function application to make my point clearer):

break(p, l) = span(not . p, l)

Also, I think flip should be defined with 1 argument and not 3, and foldl with 2 arguments and not 3, so you’d be able to write:

reverse = foldl(flip((:)), [])

Or pointful:

reverse(l) = foldl(flip((:)), [])(l)

Note that this doesn’t require automatic currying where all arguments are always curried. It just requires being able to return functions as the return type of a function. Not having automatic currying means f(x1,...,xn)(y) is not interchangeable with f(x1,...,xn,y), but it doesn’t mean functions cannot return functions as output.
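
Here is one way to render that idea in Haskell syntax (foldlT and reverseT are invented names): the function takes all of its “ordinary” arguments in a single tuple, so nothing is curried automatically, yet it still returns a function:

-- foldlT takes its step function and seed together, as one tuple argument,
-- and its result is itself a function [a] -> b.
foldlT :: (b -> a -> b, b) -> ([a] -> b)
foldlT (step, z) = go z
  where
    go acc []     = acc
    go acc (x:xs) = go (step acc x) xs

-- Which is enough to write reverse in the style above:
reverseT :: [a] -> [a]
reverseT = foldlT (flip (:), [])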

Also for flip in particular, it might be more readable to write a lambda anyway:

reverse(l) = foldl(\xs x -> x : xs, [])(l)

Edit: I’m now realizing that what I wrote above is actually an argument against automatic currying and not against partial application (ruling out partial application entirely really would mean functions cannot return functions as output). If you really don’t want to use any partial application then the code would look like this:

break(p, l) = span(\x -> not(p(x)), l)

any(p, l) =  or(map(p, l))
all(p, l) = and(map(p, l))

unlines(l) = concatMap(\l -> l ++ "\n", l)

reverse(l) = foldl(\xs x -> x : xs, [], l)

(flip and (.) don’t make sense without partial application, IMO.)

Still, I don’t think this is that much more cumbersome to read and write.