The topic of the merits of effect systems is very interesting but has diverged significantly from the original topic. Perhaps a mod would be able to split the recent posts that are not about Bluefin and effectful into a new thread?
You made a lot of good points, but this one got me thinking. Do you propose record of functions as a good alternative to effect systems?
But if that is the case, `Bluefin` actually is a fancy record of functions solution!
```haskell
write :: Connection -> Foo -> IO ()
write :: (e :> es) => Connection e -> Foo -> Eff es ()
```
I don’t understand how one `write` is modular and reusable, while the other isn’t. Both accept arguments that are going to be used to perform some action. Both exist in codebases where lots of stuff is in `IO` or `Eff` respectively. The only difference I see is that `IO` allows running tons of code, while `Eff` restricts you to what the record `Connection` allows.
Even more so, many of your points apply to `Bluefin` the same way as they apply to records of functions. How come one is closer to `AbstractEffectFactory` than the other?
I think what makes Haskell unattractive to newbies is the lack of simple guidelines on how to structure applications. A Haskell Spring could use the fanciest effect library. It would be fine, as long as it took you by the hand, like Java’s Spring does.
Meanwhile, I still see `mtl` recommendations, I guess for the lack of a better “standard”.
Java is great at having good ideas and making them look bad. It made a whole generation of developers fear “abstractions”.
Java doesn’t have an `IO` type, but it does use a lot of record-of-functions passing to relevant objects, usually automated with annotations.
I like this idiom proposed by Ollie Charles (@ocharles your SSL cert needs renewing btw).
```haskell
class Monad m => MonadWhatever m where
  liftWhatever :: Free (Coyoneda WhateverAPICall) a -> m a
```
Unless you need to take in monadic arguments for some reason, it’s quite flexible and lets other effect libraries provide the necessary instances without you having to depend on every effect library in the library’s core package.
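To make the idiom concrete, here’s a minimal, self-contained sketch. I’ve inlined bare-bones `Free` and `Coyoneda` rather than depending on the `free` and `kan-extensions` packages, and `WhateverAPICall`’s operations, `send`, and the `IO` instance are all hypothetical names:

```haskell
{-# LANGUAGE GADTs #-}

-- A bare-bones Coyoneda: a Functor for free over any type constructor.
data Coyoneda f a where
  Coyoneda :: (b -> a) -> f b -> Coyoneda f a

instance Functor (Coyoneda f) where
  fmap g (Coyoneda k op) = Coyoneda (g . k) op

-- A bare-bones free monad.
data Free f a = Pure a | Roll (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Roll fa) = Roll (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g <*> x = fmap g x
  Roll f <*> x = Roll (fmap (<*> x) f)

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Roll f >>= k = Roll (fmap (>>= k) f)

-- A hypothetical API, as a plain GADT of operations.
data WhateverAPICall r where
  GetLine' :: WhateverAPICall String
  PutLine  :: String -> WhateverAPICall ()

class Monad m => MonadWhatever m where
  liftWhatever :: Free (Coyoneda WhateverAPICall) a -> m a

-- Programs are written against 'send'; Coyoneda never leaks to users.
send :: MonadWhatever m => WhateverAPICall a -> m a
send op = liftWhatever (Roll (Coyoneda Pure op))

-- One possible instance: interpret the operations directly in IO.
instance MonadWhatever IO where
  liftWhatever (Pure a) = pure a
  liftWhatever (Roll (Coyoneda k op)) = run op >>= liftWhatever . k
    where
      run :: WhateverAPICall b -> IO b
      run GetLine'    = getLine
      run (PutLine s) = putStrLn s
```

An effect library can provide its own `MonadWhatever` instance the same way, by interpreting each `WhateverAPICall` into its own machinery.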
No, checked exceptions are a bad idea:
As mentioned earlier, from The C++ Programming Language, 4th edition (2011):
There’s also these observations from 2003:
Let’s start with versioning, because the issues are pretty easy to see there. Let’s say I create a method foo that declares it throws exceptions A, B, and C. In version two of foo, I want to add a bunch of features, and now foo might throw exception D. It is a breaking change for me to add D to the throws clause of that method, because existing caller of that method will almost certainly not handle that exception.
Adding a new exception to a throws clause in a new version breaks client code. […]
The scalability issue is somewhat related to the versionability issue. In the small, checked exceptions are very enticing. With a little example, you can show that you’ve actually checked that you caught the FileNotFoundException, and isn’t that great? Well, that’s fine when you’re just calling one API. The trouble begins when you start building big systems where you’re talking to four or five different subsystems. Each subsystem throws four to ten exceptions. Now, each time you walk up the ladder of aggregation, you have this exponential hierarchy below you of exceptions you have to deal with. You end up having to declare 40 exceptions that you might throw. And once you aggregate that with another subsystem you’ve got 80 exceptions in your throws clause. It just balloons out of control.
In the large, checked exceptions become such an irritation that people completely circumvent the feature. […]
So how many more programming languages will have to avoid them before people are convinced that checked exceptions are a bad idea?
```haskell
class Monad m => MonadWhatever m where
  liftWhatever :: Free (Coyoneda WhateverAPICall) a -> m a
```
New Haskellers are having enough problems with the monadic interface:
Still, today, over 25 years after the introduction of the concept of monads to the world of functional programming, beginning functional programmers struggle to grasp the concept of monads. This struggle is exemplified by the numerous blog posts about the effort of trying to learn about monads. From our own experience we notice that even at university level, bachelor level students often struggle to comprehend monads and consistently score poorly on monad-related exam questions.
…do they really need more problems by “giving” them extra artefacts from one of the most abstract branches of mathematics?
…do they really need more problems by “giving” them extra artefacts from one of the most abstract branches of mathematics?
Ollie’s post shows the rest of the interface; the Coyoneda stuff is an implementation detail that end users don’t need to interact with.
There’s also these observations from 2003:
I do actually mention this article here. Although I focus on different aspects of it, because some are not very convincing.
The versioning one just smells to me like a mix of magical thinking and a Java-esque obsession with breaking changes. Not a great mix, tbh. Funny how adding D to foo “breaks client code” (it breaks compilation), but not adding D and throwing it anyway, which breaks somebody’s code at runtime, is somehow OK.
Scalability also seems like a problem with something else. So a subsystem throws 4–10 checked exceptions, but why not just one? Maybe an ADT `Subsystem1Errors`. Oh, sorry, no ADTs in Java. Let’s make a subclass hierarchy for subsystem 1, catch some of the exceptions there, and rethrow fewer. Can’t be done. Oh, well.
So how many more programming languages will have to avoid them before people are convinced that checked exceptions are a bad idea?
So how many more programming languages, after PHP, Python, and JavaScript, have to become stunning successes before we convince ourselves that type systems are a bad idea? Oh, it is not 2003 anymore, and all of those have some form of type checking.
I don’t think whatever was half-baked into Java and C++ in the 1990s should forever cast a shadow on that idea.
For one, checked exceptions are provided by libraries here, so no need to worry that the standard library’s `readFile` will annoy everybody forever.
The other thing is, in Haskell we do a lot of result types, especially in pure code, and many of those are checked exceptions in all but name.
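As a sketch of such a result type (all names hypothetical): the error channel is an ordinary ADT, so adding a constructor later surfaces as incomplete-pattern warnings in every caller, which is exactly the compile-time feedback checked exceptions aim for:

```haskell
-- Hypothetical subsystem error type: one ADT for the whole subsystem,
-- instead of a zoo of exception classes.
data SubsystemError
  = NotFound FilePath
  | BadFormat String
  deriving Show

-- A "checked exception in all but name": failure is in the return type.
parsePort :: String -> Either SubsystemError Int
parsePort s = case reads s of
  [(n, "")] | n > 0 && n < 65536 -> Right n
  _                              -> Left (BadFormat s)

-- Callers must handle every constructor; a new one would be a warning here.
describe :: Either SubsystemError Int -> String
describe (Right n)            = "port " ++ show n
describe (Left (NotFound p))  = "missing file: " ++ p
describe (Left (BadFormat s)) = "bad port: " ++ s
```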
the Coyoneda stuff is an implementation detail that end users don’t need to interact with.
That would be a refreshing change:
https://wiki.haskell.org/Monad_tutorials_timeline
I don’t think whatever was half-baked into Java and C++ in the 1990s should forever cast a shadow on that idea.
That looks suspiciously like another variant of the No True Scotsman fallacy - “a properly-designed effect system/exception framework shall not cause any problems…”
```haskell
write :: Connection -> Foo -> IO ()
write :: (e :> es) => Connection e -> Foo -> Eff es ()
```
I don’t understand how one `write` is modular and reusable, while the other isn’t.
The first one can be used everywhere. It has no dependencies. Its arguments are monomorphic. I can just insert it into any `IO` computation and it will work without the need to change interfaces.
The second one is tightly coupled to the effect system. It’s impossible to convert it to a pure `IO` action and pass it to a non-`Eff` subsystem, as it was specifically designed not to allow `Connection e` to escape. It’s polymorphic, so any data structure or function that uses `Connection e` becomes polymorphic despite no real polymorphism being involved. It’s only reusable within the `Eff` framework and unusable outside. Doesn’t look too modular.
Even more so, many of your points apply to `Bluefin` the same way as they apply to records of functions.
I re-read my points and can’t find which of them apply to a record of functions.
I think what makes Haskell unattractive to newbies is the lack of simple guidelines on how to structure applications.
Not sure about “how to structure applications” – it very much depends on the application. But some guidelines about good and bad practices are definitely possible. I said a similar thing before: Haskell lacks a body of industrial usage wisdom.
Meanwhile, I still see `mtl` recommendations, I guess for the lack of a better “standard”.
And that’s what frustrates me. Effect systems are highly experimental, most of them are not production ready and/or have very serious flaws. Yet somehow they became a necessary pre-requisite for writing basic programs. They’re not.
If a new experimental thing appears, it’s probably better to advertise it as “look, what an interesting way to program” than “this is the future of Haskell”
I think the universal conclusion we can draw from all these cases is that it’s better to push IO, the allmother of exceptions, to the edge of your program, as far away as possible. Handle them however you like. Discretion is key.
Agreed. The more pure code and the less `IO`, the better.
If one wants better control, it should be a deliberate decision, not a jump on a bandwagon.
Surprisingly, the most obvious option – providing a pure `IO` interface – is missing. This would be the easiest for library users to integrate into whatever framework they’re using.
What’s the point of granularity if all those finely-sliced effects are going to be mashed together again into an I/O action?
- An effect implementation doesn’t necessarily directly interface with `IO` at all
- Limiting the surface area of where `IO` is used (e.g., only in the final unwrapping of an effect system) has its own benefits, which are basically the arguments in favour of using pure functions where possible
Yes, everything must eventually be mashed together into an I/O action. But that’s all Haskell code - it must eventually get to `main`. That doesn’t stop us from writing pure functions.
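As a tiny sketch of that shape (names illustrative): all the logic lives in a pure, testable function, and only `main` touches `IO`:

```haskell
-- Pure core: all the actual logic, trivially testable.
summarise :: String -> String
summarise input = unlines [show (length (lines input)) ++ " lines"]

-- IO shell: the single place where everything gets "mashed together".
main :: IO ()
main = interact summarise
```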
An effect implementation doesn’t necessarily directly interface with `IO` at all
That’s why I’ve been taking care to specify the effects as being “external” (as in externally-visible) - as mentioned elsewhere, “internal” effects can be confined `runST`-style.
Limiting the surface area of where `IO` is used (e.g., only in the final unwrapping of an effect system) has its own benefits, which are basically the arguments in favour of using pure functions where possible.
No.
For a time it was proclaimed that the solution to [chemical] pollution was dilution, which is false. Now for Haskell, a similar proclamation is being made - “the solution to the pollution of effects is the dilution of `IO a` into individually-typed effects”. But one effect seems to always be ignored - the effect on code.
Be it:

- `Eff [... {- "external" effects -} ...] a`
- or regular `IO a`
…if a change to some obscure definition deep in the program means that definition then relies on an “external” effect, then everything that directly or indirectly relies on that formerly-obscure definition (i.e. its reverse dependencies) that was ordinary effect-free Haskell code must also be changed. Whether you use the maximum “dilution” - a single “external” effect - or potentially all of them is irrelevant: as (correctly) noted in Kleidukos’s presentation, avoiding nondeterminism means all effects must be used in a sequential context, which can only be accessed in full via the monadic interface.
So using individual “external” effects is definitely not the same as just using ordinary effect-free Haskell definitions.
So can any unifying concept be abstracted from all of this?
The unifying concept is indeed the existence of IO. You can choose how you want to deal with it. Maybe you feel more at ease with a “no missile launches here” tag effect reminded to you by the type system. Maybe not. Thus, each library with its opinion.
Prior to version 1.3, Haskell went even further - `IO a` didn’t exist! So the entire program was just an ordinary effect-free function
Also, from the paper:
This request/response story is expressive enough that it was adopted as the main input/output model in the first version of Haskell, but it has several defects:
• It is hard to extend. New input or output facilities can be added only by extending the Request and Response types, and by changing the “wrapper” program. Ordinary users are unlikely to be able to do this.
• There is no very close connection between a request and its corresponding response. It is extremely easy to write a program that gets one or more “out of step”.
• Even if the program remains in step, it is easy to accidentally evaluate the response stream too eagerly, and thereby block emitting a request until the response to that request has arrived – which it won’t.
The representation was error prone.
It certainly was! So when a way was found to do it, dialogue-based I/O (which kept the management of “external” effects outside Haskell) was replaced with functor applicative arrow comonad monad-based I/O in the form of the abstract type `IO a` (which brought the management of “external” effects inside Haskell).
But now, `IO a` has been deemed as being semantically error-prone, in need of dilution into separate “external” effects. However there’s a problem here too - as noted in Kleidukos’s presentation, side effects are arbitrary (with “external” effects being externally-visible side effects). So how `IO a` should be diluted is also arbitrary - there are no “unit effects” in the way there are chemical elements…and now there are more effect systems for Haskell than the 92 naturally-occurring elements of chemistry.
To summarise:
| “External” effects are managed… | Problem/s |
|---|---|
| outside Haskell (“wrapper”) | Error-prone (finicky!) |
| inside Haskell (`IO a`) | Error-prone (semantically!) |
| inside Haskell (effect system) | Error-prone (wrong choice!) |
…are there any other alternatives that can work for Haskell?
In my view the only true benefit to effect systems is that it’s the only sane way of getting dynamic dispatch in Haskell, so for anything production-grade you *could* have a mirror test system that behaves exactly the same, but calls test versions of all real-world things it links to. It’s still a remarkably hard thing to achieve however: you have to know how to structure your code and you have to avoid any type shenanigans because recursive type families are exponentially slow.
I sympathize with the idea that it would be nice to keep track of which effects are used in any given function, it’s a strongly typed language after all, but having to choose one of five effect libraries and then getting “rewarded” with both a bulkier codebase and performance overheads squarely puts this in the “only use at work on the high level” territory. If GHC had seamless native support for this and some way to precompile functions that only use one implementation at runtime, it would be a no-brainer.
I don’t see a natural benefit of using `Reader`s over plain argument passing. Passing arguments indeed looks bulkier at first glance, but it allows me to divide the context into the smallest necessary bits at every point. `Reader`, on the other hand, necessitates using lenses (or, more recently, field selectors), blurs the line between which arguments are actually needed in a given function, and does not look any prettier.
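For a side-by-side sketch of the two styles being compared (names hypothetical; the `Reader` version uses mtl):

```haskell
import Control.Monad.Reader

-- A hypothetical application context.
data Config = Config { userName :: String, verbose :: Bool }

-- Plain argument passing: the signature says exactly which bit is needed.
greetPlain :: String -> String
greetPlain name = "hello, " ++ name

-- Reader style: the signature only says "some Config is needed",
-- even though only 'userName' is actually used.
greetReader :: Reader Config String
greetReader = do
  name <- asks userName
  pure ("hello, " ++ name)
```

Both produce the same string; the difference is purely in what the types tell the reader about the function’s real requirements.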
I don’t see a need in checked exceptions, I lean on the side of “if I expect something to fail without terminating the process, then it’s not an exception”. It’s a natural extension of trying to decouple everything pure from effects, as I can simply use datatypes to convey that an undesired (but not critical) condition has occurred instead of breaking the control flow. It’s also faster.
Regarding lack of modularity in effects libraries, that’s pretty much how all of Haskell’s ecosystem works: instead of providing the minimal tools and letting users stitch things together, it’s instead expected that you use one of fifteen convenient runner functions and for anything beyond that you have to dive into non-PVP-compliant internals. This won’t change unless the community agrees it’s undesirable, good luck with that.
There’s also these observations from 2003:
Let’s start with versioning, because the issues are pretty easy to see there. Let’s say I create a method foo that declares it throws exceptions A, B, and C. In version two of foo, I want to add a bunch of features, and now foo might throw exception D. It is a breaking change for me to add D to the throws clause of that method, because existing caller of that method will almost certainly not handle that exception.
Adding a new exception to a throws clause in a new version breaks client code. […]
Regardless of whether you reflect the change (throwing a new type of exception) in the type signature or not, throwing a new type of exception is a breaking change (i.e. can break clients), and the users should be aware of it. It should cause a major version bump in your library even without checked exceptions.
Clients then need to consider what to do about the exception, and revisit the call sites.
My observation is that a lot of commentary on checked exceptions from 2000s and earlier (mostly in the context of C++ and Java) do not apply to today’s type systems, scale, and language features. Java did it poorly, and left people traumatized.
There will always be a use case for unchecked (and asynchronous) exceptions, like `StackOverflow`, `HeapOverflow`, and `ThreadKilled`. It probably makes sense to give the users the ability to throw unchecked exceptions as well, and let them decide when it makes sense to have an exception checked vs. unchecked.
Checked exceptions can be modeled as effects, and I suspect with polymorphic variants they can be conveniently composed (I have some notes on this here). No one proved that they can’t be made convenient to use.
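As a rough sketch of composing error channels without polymorphic variants, using `ExceptT` from transformers (all the error types and functions here are hypothetical): each subsystem keeps its own error ADT, and the aggregate maps them into one sum; polymorphic variants would remove the manual wrapping:

```haskell
import Control.Monad.Trans.Except

data DbError    = ConnectionLost     deriving Show
data ParseError = Unparseable String deriving Show

-- The aggregate's error type: a sum of the subsystems' errors.
data AppError = Db DbError | Parse ParseError
  deriving Show

fetchRaw :: Monad m => ExceptT DbError m String
fetchRaw = pure "42"

parseVal :: Monad m => String -> ExceptT ParseError m Int
parseVal s = case reads s of
  [(n, "")] -> pure n
  _         -> throwE (Unparseable s)

-- Each error channel is mapped into the sum with 'withExceptT'.
fetchVal :: Monad m => ExceptT AppError m Int
fetchVal = do
  raw <- withExceptT Db fetchRaw
  withExceptT Parse (parseVal raw)
```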
My observation is that a lot of commentary on checked exceptions from 2000s and earlier […] do not apply to today’s type systems, scale, and language features.
Then here’s the challenge for you and everyone else who thinks `IO a` is now bunglesome and needs diluting:

```haskell
type IO a = Eff All a
```

…use your preferred system of effects to provide a Haskell declaration for `All`.
I see a lot of speculation here and people speaking past each-other. Why don’t you all go and build something with an effect system, and come back with actual production insights? The conversation would certainly be more productive.
Why don’t you all go and build something with an effect system, and come back with actual production insights?
I think I’ve heard this before…yes, that’s what it was:
Unfortunately, no one can be told what the Matrix is. You have to see it for yourself. – Morpheus
I see… a decades-old pop culture reference. Disappointing. By the way, Morpheus could have just said “The Matrix is a machine-made virtual world that your real body, currently residing in a vat full of snot, has been plugged into and receiving stimuli from all your life.”
Right. There’s no need to get overly mystical, when good explanations are all that more satisfactory.
…can we just wait until the GHC developers choose one, so it can be deemed as ~~recommended~~ workable?
The second one is tightly coupled to the effect system. It’s impossible to convert to a pure `IO` action and pass to a non-`Eff` subsystem … It’s only reusable within the `Eff` framework and unusable outside. Doesn’t look too modular.
This is not true. There is no coupling to the effect system (assuming we’re talking about Bluefin). To see this, note that this function is safe:
```haskell
tag :: Untagged.Connection -> Tagged.Connection e
```

so you can define the former `write` in terms of the latter:

```haskell
untaggedWrite :: Connection -> Foo -> IO ()
untaggedWrite conn foo = runEff $ \io ->
  taggedWrite (tag conn) foo
```
I said a similar thing before: Haskell lacks a body of industrial usage wisdom.
Agreed with this. It would be good to establish one.
Effect systems are highly experimental, most of them are not production ready and/or have very serious flaws.
I disagree with this. Effect systems as a whole are an extremely well understood area of the Haskell world. They’re so well known, in fact, that their significant weaknesses, and potential approaches to ameliorate them, are extensively covered and recovered ground.
However, until the progression from `ReaderT IO` to `effectful` that I covered in my talk, there had been no approach developed that resolved all the significant weaknesses. `effectful` does address all the issues of existing effect systems (with one exception: it doesn’t directly support multi-shot continuations – that’s probably fine!) and we know it doesn’t have any additional weaknesses of its own relative to `IO` because it is just `IO`. Bluefin inherits these properties.
I suppose one could argue: “but the additional interface that `effectful` and Bluefin put on `IO` is too complex”. But firstly, `IOE :> es => Eff es a` is (almost) just `IO a`, so you’re never far from the lowest common denominator, and secondly, I don’t believe there is a simpler way to carve out effects from `IO`. Can you think of one? If not then that’s evidence that carving out effects requires a certain level of additional complexity. If you don’t want that complexity then so be it, but that’s not the same as a proof that `effectful` and Bluefin are experimental or flawed.
Yet somehow they became a necessary pre-requisite for writing basic programs. They’re not.
If a new experimental thing appears, it’s probably better to advertise it as “look, what an interesting way to program” than “this is the future of Haskell”
I don’t recall anyone saying effect systems were a pre-requisite for writing basic programs. Can you point out such a claim?
Regarding “this is the future of Haskell”, perhaps you’re referring to my slide “Bluefin is the future of Haskell!” at timestamp 40s of my talk. To be clear, that is not advertising. That is simply my belief. The point of the talk was to justify that belief based on properties of Bluefin. To summarise why that is my belief:
To justify effect systems per se: I believe it is useful in practical programs to “make invalid operations unrepresentable”, i.e. use types to tightly delimit what externally-visible effects a function can perform. This must include, at minimum, state, exceptions and I/O, and it must do so in a composable manner.
To justify `IO`-based effect systems: it is essential for practical programming that an effect system provide resource safety and easy reasoning about behaviour. I don’t believe this is possible outside `IO`-based effect systems.
To justify Bluefin (i.e. value level effect arguments) versus `effectful` (i.e. type level effect arguments): I think it’s simpler and more approachable. (I wouldn’t really be surprised or disappointed if `effectful` won out over Bluefin. It’s a matter of taste and I’m happy to let the market decide. But the Haskell ecosystem really does need `IO`-based effect systems to displace all others, and I think that’s inevitable.)
EDIT:
any data structure or function that uses `Connection e` becomes polymorphic despite no real polymorphism being involved. It’s only reusable within the `Eff` framework and unusable outside. Doesn’t look too modular.
Do you also feel the same way about `ST`, which has the same “polymorphic” property?
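For reference, the `ST` version of that property (a standard `Control.Monad.ST` example; names illustrative): `bumpTwice` is polymorphic in `s` despite doing nothing polymorphic, and it is exactly that phantom type that stops the mutable reference escaping:

```haskell
import Control.Monad.ST
import Data.STRef

-- Polymorphic in 's' despite "no real polymorphism being involved":
bumpTwice :: STRef s Int -> ST s Int
bumpTwice ref = do
  modifySTRef' ref (+ 1)
  modifySTRef' ref (+ 1)
  readSTRef ref

counter :: Int
counter = runST $ do
  ref <- newSTRef 0
  bumpTwice ref
-- runST's rank-2 type, (forall s. ST s a) -> a, is what rejects any
-- attempt to return 'ref' itself from the computation.
```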