Towards an actor framework for Haskell

As we all know and love, Haskell helps us to write correct code, especially in pure settings. Once the real world enters the picture, though, things get slightly murkier. GHC Haskell allows for efficient IO and threading, and brings advanced ways to handle exceptions. When writing an application that handles independent requests separately, things aren’t too bad. However, when there’s interaction between requests, shared state within the application, coordination of access to external services, etc., things become quite a bit more complex.

To handle this exceptional world, some languages/platforms/frameworks adopted a “let it crash” model, which IMHO is the easiest way to handle this real-world environment: instead of trying to intelligently handle everything that can go wrong, simply quit when some exceptional situation is detected and let some other entity pick up the pieces. The best-known example bringing this theory into practice is likely Erlang.
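To make the idea concrete, here’s a minimal sketch of what “let it crash” supervision could look like in plain GHC Haskell, using `forkFinally`. This is my own illustration, not code from any existing library, and `superviseForever` is a name I made up:

```haskell
import Control.Concurrent (forkFinally)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Supervise a worker thread: instead of handling every possible error
-- inside the worker, let it crash, get notified, and restart it from a
-- known-good state.
superviseForever :: IO () -> IO ()
superviseForever worker = do
  done <- newEmptyMVar
  _ <- forkFinally worker (putMVar done)
  result <- takeMVar done  -- blocks until the worker thread terminates
  case result of
    Left e -> do
      -- 'e' is a SomeException: any crash, not just anticipated ones
      putStrLn ("worker crashed: " ++ show e ++ "; restarting")
      superviseForever worker
    Right () -> putStrLn "worker finished normally"
```

A real supervisor would of course add restart intensities, strategies, and child specifications on top, which is exactly where OTP’s design is worth studying.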

Over the years, several projects have attempted to bring Erlang-style processes (sometimes called ‘actors’) to Haskell. Many are merely proofs of concept, or fail to capture the essence of Erlang’s model, e.g., by not providing equivalents of link/unlink or monitor, or by only supporting a FIFO mailbox without the ability to match on messages. The most “complete” implementation is likely found in distributed-process, an implementation of the paper “Towards Haskell in the Cloud” and the successor of the remote library. distributed-process aims to bring cross-node remote capabilities (as Erlang does), including sending closures to remote nodes, and much more. However, despite some impressive engineering, IMHO it didn’t succeed in capturing the community’s interest sufficiently, and it now seems somewhat undermaintained. Furthermore, it can be a tad complex to use in an application that’s not (internally) distributed across nodes.
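For context, the “matching on messages” part (Erlang’s selective receive) can be sketched in a few lines of STM: rather than consuming the mailbox FIFO, the receiver takes the first message satisfying a predicate and leaves the rest in place. The names (`Mailbox`, `receiveMatch`) are hypothetical, not taken from any of the libraries mentioned:

```haskell
import Control.Concurrent.STM

-- A mailbox is just a queue of messages kept in a TVar.
newtype Mailbox msg = Mailbox (TVar [msg])

newMailbox :: IO (Mailbox msg)
newMailbox = Mailbox <$> newTVarIO []

-- Appending with (++ [m]) is O(n); fine for a sketch, not for production.
send :: Mailbox msg -> msg -> IO ()
send (Mailbox tv) m = atomically (modifyTVar' tv (++ [m]))

-- Selective receive: take the first message matching the predicate,
-- leaving the others in place; block (via retry) until one matches.
receiveMatch :: Mailbox msg -> (msg -> Bool) -> IO msg
receiveMatch (Mailbox tv) p = atomically $ do
  msgs <- readTVar tv
  case break p msgs of
    (_, [])           -> retry
    (before, m:after) -> writeTVar tv (before ++ after) >> pure m
```

distributed-process implements this far more cleverly (typed match clauses over a weighted queue), but the STM version shows the semantics cheaply.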

So, I was wondering: would there be any community interest in a project that provides an implementation of the actor model, following the semantics and functionality Erlang (and, let’s not forget, OTP) brings, but restricted to single-process operation, i.e., no cross-process/cross-node features? Of course, the latter can be built on top of the process-local core functionality, as, e.g., the Partisan library does, bypassing Erlang’s distributed capabilities.

If this is the case, would anyone be willing to actively contribute to such a project? I’d love to work on this with a group of motivated developers, both newcomers to Haskell and veterans, instead of building it all by myself :musical_note: . On the Haskell side, there’s a ton to learn from distributed-process (its implementation of pattern matching on mailbox messages, for example, is very intriguing), and of course Erlang, its semantics, and the behaviours provided by OTP should be studied as well.

I started assembling some thoughts in a wiki page, though right now everything is open for debate, of course.

Would love to hear from you if you’d be enthusiastic about this project, and willing to collaborate!


Another point of conversation for you: GHC issue #21578: [Discussion] Erlang-style processes in the RTS


I’d be delighted to see this project take off.

Imitation is the sincerest form of flattery, and I’d love us to take the best of Erlang and adopt it for Haskell. The paper Towards Haskell in the cloud makes a pretty concrete stab in that direction, but you are right to say that it never quite “caught fire”.

And yet, I’m certain that there is much un-realised potential there. In particular, Erlang’s failure model is its biggest contribution and I’m sure we could adopt it, or something very like it.

I can’t promise much bandwidth, but I’d be happy to help in any way I can.



My 2 cents and brainstorming here:

On top of what Cloud Haskell has proposed, it would make such a cloud runtime especially compelling and competitive if:

  1. ProcessM could be dynamically bound to operations either on the same machine or across different machines, depending on the load detected by the runtime (some commercial applications might call this being elastic?)
  2. In a likely Wasm-ubiquitous future, ProcessM could be an interoperable construct, so that other languages can also build compatible ProcessM effects.

Edit: I apologize to the author that I didn’t read the first sentence in the link:

  • There’s no intent to work cross-node. This library allows to build actor-style applications within a single OS process.


That’s very interesting indeed, thanks for the pointer, subscribed! Indeed, Erlang’s use of processes as a GC domain allows for soft-real-time applications, and the RTS supporting something like this could definitely be leveraged by an actor “framework”. When reading the ticket, I immediately thought of compact regions for cross-domain communication, as @bgamari then brings up as well :slight_smile:
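For readers unfamiliar with them: compact regions (exposed via the `ghc-compact` boot library’s `GHC.Compact` module) copy a fully-evaluated structure into a region the GC treats as a single object, which is what makes them attractive for cheap hand-off between heap domains. A tiny illustration; `compactedPair` is just an example name of mine:

```haskell
import GHC.Compact (compact, getCompact)

-- Copy a structure (forcing it along the way) into its own compact
-- region; the GC then scans the whole region as one object, so sharing
-- it across per-actor heap domains wouldn't require tracing its insides.
compactedPair :: IO (Int, String)
compactedPair = do
  region <- compact (sum [1 .. 100 :: Int], "payload")
  pure (getCompact region)
```

Note that only immutable, fully-evaluatable data without functions can be compacted, which maps rather naturally onto “messages between actors”.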

I do believe one does not preclude the other: an actor “framework” doesn’t require per-process heaps. Having a well-designed developer-facing actor library could therefore help in the design of the relevant RTS APIs when the time’s ripe, and such APIs could then be adopted (potentially with breaking developer-facing changes, depending on the design, of course).

Thanks for the support, Simon! I agree, the failure model is central to all the goodies Erlang-inspired software architectures bring, and I believe we already have all that’s needed to present this to application developers. With the very basics in place, higher-level constructs can then allow for more code/pattern re-use, similar to how Erlang has its builtins, then the kernel package, then the stdlib (where gen_server lives), etc.
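As a rough illustration of what such a higher-level construct could look like, here’s a minimal gen_server-style call loop. This is a sketch of mine, not a proposed API; replies travel back through an MVar embedded in the request:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.STM

-- A server owns some state and a handler for synchronous calls,
-- vaguely like gen_server's handle_call.
data Server req rep = Server (TQueue (req, MVar rep))

startServer :: s -> (req -> s -> (rep, s)) -> IO (Server req rep)
startServer s0 handle = do
  q <- newTQueueIO
  let loop s = do
        (req, replyTo) <- atomically (readTQueue q)
        let (rep, s') = handle req s
        putMVar replyTo rep
        loop s'
  _ <- forkIO (loop s0)
  pure (Server q)

-- Synchronous call: enqueue the request, block on the reply.
call :: Server req rep -> req -> IO rep
call (Server q) req = do
  replyTo <- newEmptyMVar
  atomically (writeTQueue q (req, replyTo))
  takeMVar replyTo
```

The real design work is in everything this sketch omits: casts, timeouts, linking the server to its caller, and what happens to in-flight calls when the server crashes.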

No worries about that last part :smiley: I envision this to be single-node only, at least at first, since IMHO those single-node primitives are the foundation on which distributed operation can later be built. Where distributed-process mixes the two, I believe one should be stacked on top of the other.

If at all. I’m not sure how many BEAM-based projects actually use distributed Erlang features, vs. Erlang (or Elixir or whatnot) applications running within a single node/process, and using more “standard” ways to communicate with other services running in other processes (locally or remotely) using, e.g., some HTTP APIs.

As to the WASM-based approach: the BEAM specifies a bunch of things, including serialization formats (ETF), which allows for BEAM-compatible libraries to be built for other platforms/languages, at which point one can integrate processes using said libraries into a distributed Erlang cluster. As an example, ergo is such a library for Go, and back in the day there was TwOTP for Twisted Python (funnily enough, the first Google hit is a 2009 blogpost of mine :laughing:).

Now, I’m not sure that’s the right path forward. I’m not sure there’s a lot of value in such interoperability (I could be wrong, of course): why wouldn’t one design things such that interactions between some Erlang service and some other service happen through more traditional ways? Furthermore, this interop may limit what can be provided by this project.

But of course, there’s room for experimentation!


There’s some data we can certainly get from our friends at the Erlang Ecosystem Foundation. If you have more specific questions I’d be happy to relay them.

I absolutely loved distributed-process for its actors/inboxes model and happy to see another attempt at this.

Never got to use it for remote calls, but it was a life- and sanity-saver when juggling multiple coordinated processes.


Hi @NicolasT, I am very enthusiastic about this project and I would love to have a common actor framework to build on.

Here are some notes about your design ideas:

  • Integrating MonadLogger or MonadLog sounds a bit too opinionated; is it necessary?
  • What’s the reason for using STM and NFData for message passing? Lazy messages may be useful too, and transactions could be made optional.
  • How do we prevent threads from leaking with async? Should we consider ki to solve this issue? Are the actors supposed to be single-threaded?
  • I am particularly interested in having system metrics, but does it have to be in the Prometheus format? What about reduction counts (e.g., CPU usage); could those be reported too?
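On the STM/NFData point: the usual argument for forcing messages before sending is that the sender, not the receiving actor, should pay the evaluation cost, and that an exception hiding in a thunk then surfaces at the send site instead of crashing the receiver. A sketch, assuming a plain TQueue mailbox; `sendStrict` is a name I made up:

```haskell
import Control.Concurrent.STM
import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)

-- Force the message to normal form *before* the STM transaction, so the
-- sender pays the evaluation cost and any exception hiding in a thunk
-- is raised at the send site rather than inside the receiving actor.
sendStrict :: NFData msg => TQueue msg -> msg -> IO ()
sendStrict q msg = do
  msg' <- evaluate (force msg)
  atomically (writeTQueue q msg')
```

Making this optional (a lazy `send` next to a strict one) seems entirely doable, which supports the point that it needn’t be baked in.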

I wonder if the messaging could be decoupled from the process management. It seems like handling the thread hierarchy and supervision could be useful on its own.

Thank you for starting this discussion; I am looking forward to its development.


Thanks for the feedback! Would you mind if I create some Discussions in the GitHub project? I think it’s better to discuss these (design) topics in the repo rather than here :smiley:

I went ahead and created some.


I highly appreciate this initiative. At my company we use Scala with Akka Cluster, and a big benefit I see is that actors scale really well, also conceptually. Whether you work on a small project within a single process or transition to a large, highly-distributed project, the communication API/code can stay the same. This is in stark contrast to other approaches, like transitioning from a regular monolith to microservices, where everything gets way more complicated.


That is great, thank you for taking the time to develop these points. It looks like you have a thorough design :). Perhaps it would be good to compare troupe with distributed-process beyond the focus on single-node operation.

Also, may I suggest including some motivating examples, such as database and web-service actors? I think that would help to demonstrate the use-case.


This sounds like a fun adventure; happy to help out if I can. I’ve had some fun work projects in this application domain before.


My primary use-case for now is GitHub - NicolasT/panagia: An island close to Paxos.


Is it a Paxos implementation?


I should write about the exact goals of the project :wink: Give me some time…


I am sure you are aware, but there are emerging projects like Gleam in the BEAM world. There was also Caramel. I don’t know if compiling to BEAM interests you.

(Also, as a former Akka and Haskell production dev, and a current Elixir/Elm dev, I would personally focus on building out the IHP or Servant ecosystems further for boring CRUD apps, and let that naturally drive people to call for a BEAM-like framework, for the minority that want to reach for such scalability or availability after building out a basic web application that gets millions of users.)


When we were using Akka, it sometimes felt like using actors instead of futures led to very ugly spaghetti code, where understanding a given business workflow took tracing through many messages.