What is a higher-order effect?

Via a comment of @arybczak, I came to this example of an effectful higher-order effect:

Preamble
{-# LANGUAGE GHC2021 #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}

import Effectful (Dispatch (Dynamic), DispatchOf, Effect, liftIO, runEff)
import Effectful.Dispatch.Dynamic (interpret, send)
import Effectful.Reader.Dynamic (ask, local, runReader)

data SomeEff :: Effect where
  SomeAction :: SomeEff m ()

type instance DispatchOf SomeEff = Dynamic

example :: IO ()
example =
  runEff $
    runReader "global" $ do
      interpret (\_ SomeAction -> (liftIO . putStrLn) =<< ask) $ do
        local (const "local") $ send SomeAction

The output is

ghci> example
local

This is astonishing to me! The interpretation function, despite being defined where the reader state is "global", returns "local" when it is called within the context of the local. Firstly, it’s astonishing to me that effectful can achieve this. Perhaps it explains the significant additional complexity of its Eff monad compared to Bluefin’s (which is just a newtype around IO). Secondly, it’s astonishing to me that people want this behaviour. This is a form of dynamic scoping, something that functional languages have aimed to eschew.

Consequently, I have a couple of questions

  1. Is this dynamic scoping behaviour part of the definition of “higher order effect”? If so then that’s significant for Bluefin: I don’t think Bluefin can ever support this kind of behaviour. EDIT: Bluefin can support this kind of behaviour.
  2. What do people use this dynamic scoping behaviour for? I can’t think of a situation where I wished I had that. EDIT: @ocharles motivates this behaviour very well.

What is a higher-order effect?

According to Birthe van den Berg and Tom Schrijvers, a higher-order effect is an effect that is able to access an internal computation.

So that interpretation function (and its behaviour) seems to be an example of a scoped effect.


I think it’s just a fancy name for an effect that can be used in a callback. A function that takes a callback function as an argument is a higher-order function, so processing effects within the callback is considered a “higher-order” effect. It’s the sort of thing that’s handled by MonadUnliftIO, for example.
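
For instance, here is a minimal sketch of that reading: a lifted bracket is "higher-order" because its acquire, release and use arguments are themselves effectful computations, and withRunInIO is what lets them be run from plain IO (this is roughly how libraries like unliftio lift bracket).

import qualified Control.Exception as E
import Control.Monad.IO.Unlift (MonadUnliftIO, withRunInIO)

-- The acquire, release and use arguments are effectful computations in m,
-- which is what makes the operation "higher-order".
bracketM :: MonadUnliftIO m => m a -> (a -> m b) -> (a -> m c) -> m c
bracketM acquire release use =
  withRunInIO $ \runInIO ->
    E.bracket (runInIO acquire) (runInIO . release) (runInIO . use)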


To me local “makes sense”, though maybe I’m stuck in Stockholm. My reasoning:

  1. The monomorphic version of local is merely modifying an ordinary (locally scoped) function parameter, so no problems here:

    local :: (r -> r) -> ReaderT r m a -> ReaderT r m a
    local f (ReaderT onEnv) = ReaderT $ \env -> onEnv (f env)
    
  2. Similarly, the polymorphic mtl version is pretty routine since it’s just dictionary passing:

    local :: MonadReader r m => (r -> r) -> m a -> m a
    

So my vague impression is that the semantics for effectful, etc. should be pretty sensible too, since they’re morally equivalent to typeclasses, right?

I don’t use local often, but I have used it for namespacing logs. For instance:

addNamespace :: MonadReader [String] m => String -> m a -> m a
addNamespace ns = local (++ [ns])

foo :: MonadReader [String] m => m ()
foo = addNamespace "foo" $ do
  log "something"
  addNamespace "doThing" $ log "more logs"

-- [foo]: something
-- [foo.doThing]: more logs
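
For a self-contained version of the above, here is a hypothetical logMsg standing in for log (which would otherwise clash with Prelude.log), run over ReaderT [String] IO:

import Control.Monad.IO.Class (MonadIO, liftIO)
import Control.Monad.Reader (MonadReader, ask, local, runReaderT)
import Data.List (intercalate)

-- Hypothetical logger: prefixes the message with the current namespace.
logMsg :: (MonadReader [String] m, MonadIO m) => String -> m ()
logMsg msg = do
  ns <- ask
  liftIO $ putStrLn ("[" ++ intercalate "." ns ++ "]: " ++ msg)

addNamespace :: MonadReader [String] m => String -> m a -> m a
addNamespace ns = local (++ [ns])

foo :: (MonadReader [String] m, MonadIO m) => m ()
foo = addNamespace "foo" $ do
  logMsg "something"
  addNamespace "doThing" $ logMsg "more logs"

-- prints:
-- [foo]: something
-- [foo.doThing]: more logs
main :: IO ()
main = runReaderT foo []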

IMO the dynamic dispatch is muddying the waters with local specifically, since I’m not sure why you’d ever want different behavior. In other words, this static version in bluefin seems useful and sensible:

local ::
  forall r e es a.
  (e :> es) =>
  Reader r e ->
  (r -> r) ->
  (forall e1. Reader r e1 -> Eff (e1 :& es) a) ->
  Eff es a
local (MkReader ns) f = runReader (f ns)

addNamespace ::
  forall e es a.
  (e :> es) =>
  Reader [String] e ->
  String ->
  (forall e1. Reader [String] e1 -> Eff (e1 :& es) a) ->
  Eff es a
addNamespace r ns = local r (++ [ns])

foo ::
  forall e1 e2 es.
  ( e1 :> es,
    e2 :> es
  ) =>
  IOE e1 ->
  Reader [String] e2 ->
  Eff es ()
foo io rdr = addNamespace rdr "foo" $ \rdr2 -> do
  log io rdr2 "something"
  addNamespace rdr2 "doThing" $ \rdr3 -> log io rdr3 "more logs"

I implemented it here.

That said, effectful and friends do let you supply genuine dynamic behavior for Local and Ask. I don’t see why this would be fundamentally impossible with bluefin (fleeting attempt here), but I’ve yet to figure it out.


I have implemented a Bluefin wrapper for log-base and this is basically what I did.

What I don’t like about it is that now I have multiple loggers in scope and not much stopping me from using the wrong one.


Nice, glad to see some convergence.

What I don’t like about it is that now I have multiple loggers in scope and not much stopping me from using the wrong one.

Yeah agreed, I was hoping there was a better way. At first I made that exact mistake (referenced the wrong Reader), effectively dropping a local change.


I don’t see why this would be fundamentally impossible with bluefin (fleeting attempt here), but I’ve yet to figure it out.

Thanks. I finished the implementation and submitted a PR. It’s quite mind-bending to define runReader in terms of itself, but it seems to work out!

I’m not convinced this gives the same sort of dynamic behaviour as effectful though. Your version is dynamic in the reader implementation. The effectful code has dynamic scoping of the effect itself.

Yeah, this is a bit of a pain. I can see two possible mitigations

  1. You can wrap Reader r e -> Eff (e :& es) a up in a monad again (like EffReader) but that defeats part of the point of explicit handles.
  2. You can shadow, e.g. local r f $ \r -> ....

Just a shower thought, no idea if this is possible, but if we could speak about hiding/removing e from es, then this could help?

addNamespace ::
  forall e es a.
  (e :> es) =>
  Reader [String] e ->
  String ->
  (forall e1. Reader [String] e1 -> Eff (e1 :& es `MINUS` e) a) ->
  Eff es a
addNamespace r ns = local r (++ [ns])

Yeah, removing effects is exactly what we’d want to do, but I don’t think it’s possible (short of linear types).

I also found this behaviour surprising when I first came across it. The metaphor that worked for me is that you should consider effectful's interpretation functions to work as if they were actually substituted in for the effect calls. So the original example is actually equivalent to:

example :: IO ()
example =
  runEff $
    runReader "global" $ do
      local (const "local") $ (liftIO . putStrLn) =<< ask

which is non-mysterious.

Also note that this is exactly what you’d get if you did something similar in MTL:

class MonadSomeAction m where
  someAction :: m ()

newtype MyMonad a = MyMonad (ReaderT String IO a) deriving newtype (Functor, Applicative, Monad, MonadIO, MonadReader String)

instance (MonadIO MyMonad, MonadReader String MyMonad) => MonadSomeAction MyMonad where
  someAction = (liftIO . putStrLn) =<< ask

ghci> flip runReaderT "global" $ coerce @(MyMonad ()) @(ReaderT String IO ()) $ local (const "local") $ someAction @MyMonad
local

As for whether this is useful, yes, it is pretty useful. Pretty much any time you want to access the reader context, you do actually want the version at the call site of the effect call. As another random example, we use this to get the OpenTelemetry trace context so we can add spans for what happens in the effect handler, and you definitely want this from the call site.


To be clear, I am only surprised by dynamic scoping of interpretations. I am not surprised by dynamic scoping of function calls, because the Reader context is an implicit argument, for example in example2 below. Naturally, calling a function in Haskell acts as if the body of the function were substituted at the call site. Indeed you can implement that naturally in Bluefin (the Reader would need to be an argument to f). When it comes to interpretations, why do I want the context for the interpretation to be determined by its call site rather than its definition site? That’s not yet clear to me.

Pretty much any time you want to access the reader context, you do actually want the version at the call site of the effect call. As another random example, we use this to get the optentelemetry trace context so we can add spans for what happens in the effect handler, and you definitely want this from the call site.

This seems a very fine and good thing to want, but it’s not clear to me why effectful's dynamic effect scoping semantics are necessary for that. @tbidne’s example of local and addNamespace seems to achieve the same, without dynamic effect scoping.

I agree it’s the same, but your usage of someAction seems closer to my example2 below than my example1.

Preamble
{-# LANGUAGE GHC2021 #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}

import Effectful
  ( Dispatch (Dynamic),
    DispatchOf,
    Eff,
    Effect,
    IOE,
    liftIO,
    runEff,
    (:>),
  )
import Effectful.Dispatch.Dynamic
  ( EffectHandler,
    LocalEnv,
    interpret,
    send,
  )
import Effectful.Reader.Dynamic (Reader, ask, local, runReader)

data SomeEff :: Effect where
  SomeAction :: SomeEff m ()

type instance DispatchOf SomeEff = Dynamic

-- > example1
-- local
example1 :: IO ()
example1 =
  runEff $
    runReader "global" $ do
      let f ::
            (Reader String :> es, IOE :> es) =>
            SomeEff (Eff localEs) a ->
            Eff es a
          f = \SomeAction -> (liftIO . putStrLn) =<< ask
      interpret (const f) $ do
        local (const "local") $ send SomeAction

-- ghci> example2
-- local
example2 :: IO ()
example2 =
  runEff $
    runReader "global" $ do
      let f ::
            (Reader String :> es, IOE :> es) =>
            SomeEff (Eff localEs) a ->
            Eff es a
          f = \SomeAction -> (liftIO . putStrLn) =<< ask
      local (const "local") $ f SomeAction

Couldn’t you implement the same logic with a static reader effect? That is, instead of implementing a new dynamic effect SomeAction, why not define someAction = ask >>= SomeActionWithCtxt, then interpret the static effect SomeActionWithCtxt r without running ask? So I wouldn’t say that dynamic effects are absolutely mission-critical for your use case.
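
For concreteness, here is one way that could look, as a sketch reusing the preamble above (SomeEffCtxt and SomeActionWithCtxt are made-up names; for brevity the variant is still dynamically dispatched, but the context is passed explicitly so the handler never consults the Reader):

data SomeEffCtxt :: Effect where
  SomeActionWithCtxt :: String -> SomeEffCtxt m ()

type instance DispatchOf SomeEffCtxt = Dynamic

-- The context is captured at the call site by an ordinary ask,
-- so the interpreter itself never needs the Reader effect.
someAction :: (Reader String :> es, SomeEffCtxt :> es) => Eff es ()
someAction = ask >>= send . SomeActionWithCtxt

-- > example3
-- local
example3 :: IO ()
example3 =
  runEff $
    runReader "global" $
      interpret (\_ (SomeActionWithCtxt ctx) -> liftIO (putStrLn ctx)) $
        local (const "local") someAction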

Not sure if that’s helpful or not, but the “substitution metaphor” sounds a bit like the difference between by-name evaluation of effectful code (dynamic) vs. by-value evaluation (static). In terms of this thread, static effects seem to be “squeezed dry” at the interpretation site before substitution, whereas the interpretations of dynamic effects are substituted wholesale into use sites.


That’s how it seems to me, but I welcome more detailed examples.

I agree that is a good way of thinking about it. I’d still really like to know whether the definition of “higher order effects” includes that they must be dynamically resolved, or whether the static/dynamic distinction is a separate axis.


The sayo-hs/heftia README (thanks for the link :)) seems to consider the dynamic/static distinction separate from higher-order effects.

The same README lists MonadUnliftIO, in particular bracket, catch, and others, as examples of higher-order effects. I think a higher-order effect may simply be an effect operation that takes effectful computations as arguments.

Reading the abstract of https://dl.acm.org/doi/pdf/10.1145/3571255, it appears that when a higher-order effect composes two separately compiled bits of code, these bits of code need to dynamically interact with each other in some way.

I understand this as follows. For example (roughly, haven’t type-checked)

withFile fileName f = bracket (openFile fileName) closeFile f

code withHandle n = catch (withHandle $ \h -> if n > 42 then throwIO ... else return 3) (\_ -> return 4)

main = code (withFile "/tmp/blah") 2300 >>= print

Note that withFile and code can be compiled in separate modules, and code throws an exception type that cannot be caught anywhere else. Then in code, it appears we know exactly what will happen when we call throwIO: We will immediately call the exception handler and return 4.

But now consider what happens in main, where we combine code with withFile. Suddenly, throwIO does not immediately call the exception handler; rather it interacts with the use of bracket in withFile and closes the file handle first. This is how I understand the “higher-order effects break modular reasoning” point of the paper.
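
Here is a tiny runnable rendering of that story in plain IO (hypothetical names; withResource plays the role of withFile and useResource the role of code):

import Control.Exception (Exception, bracket, catch, throwIO)

-- An exception type that only this snippet knows about.
data Boom = Boom deriving (Show)

instance Exception Boom

-- withResource only knows about bracket; useResource only knows that it
-- throws and catches Boom. Composed, the release action runs between the
-- throw and the catch.
withResource :: (String -> IO a) -> IO a
withResource use =
  bracket (putStrLn "acquire" >> pure "a resource")
          (\_ -> putStrLn "release")
          use

useResource :: ((String -> IO Int) -> IO Int) -> IO Int
useResource with =
  (with $ \_ -> throwIO Boom) `catch` \Boom -> pure 4

-- ghci> useResource withResource >>= print
-- acquire
-- release
-- 4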


Page 3 of the “Hefty Algebras” paper is quite interesting; it defines algebraicity of an effect operation (a notion that seems quite related to “squeezing dry” its effectful computation arguments) and remarks that local is a non-algebraic operation. I’m tempted to think that “algebraic” means “static coincides with dynamic dispatch”.
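
To make the algebraicity point concrete: an operation op is algebraic when op m >>= k is the same as op (m >>= k), i.e. the continuation can be pushed under the operation. That holds for ask but fails for local, as this small ReaderT experiment (a sketch, not from the paper) shows:

import Control.Monad.Reader (ReaderT, ask, local, runReaderT)

-- For an algebraic operation op, `op m >>= k` equals `op (m >>= k)`.
-- local (const 1) is not algebraic: pushing the continuation inside
-- changes which environment the continuation sees.
outside, inside :: ReaderT Int IO Int
outside = local (const 1) ask >>= \x -> fmap (x +) ask
inside  = local (const 1) (ask >>= \x -> fmap (x +) ask)

-- ghci> runReaderT outside 10   -- continuation sees the outer env: 1 + 10
-- 11
-- ghci> runReaderT inside 10    -- continuation also runs under local: 1 + 1
-- 2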


I’d been hoping to write this post since you gave your talk, but unfortunately I wasn’t able to find the time. A lot of discourse has happened since, and it’s been very interesting! One of my questions in the talk was about local, and there’s been a lot of fascinating points in this thread about it. Something I wanted to raise was a concrete example of where local is practically useful.

Perhaps the biggest use of it in CircuitHub’s codebase has been for tracing instrumentation, and it’s been so useful I’d consider this functionality indispensable. So without further ado, let’s start looking at some code. We are effectful users, so this example is going to use effectful. First, let’s look at the API of the Trace effect.

data Trace :: Effect

inSpan :: (HasCallStack, Trace :> es) => Text -> Eff es a -> Eff es a

Essentially the Trace effect lets us scope a block of code as belonging to an OpenTelemetry tracing span. Clients can use this to instrument their code, for example:

doWork :: Trace :> es => Eff es ()
doWork = inSpan "doWork" do
  ...

But another use of this is to use it from within another effect handler:

data Database :: Effect where
  RunQuery :: Query -> Database m QueryResults

runDatabase :: (IOE :> es, Trace :> es) => Eff (Database : es) a -> Eff es a
runDatabase = interpret_ \case
  RunQuery q ->
    inSpan "runQuery" do
       liftIO (performQueryIO q)

This is already quite cool, and lets us do things like:

doWork :: Trace :> es => Eff es ()
doWork = inSpan "doWork" do
  jobs <- runQuery fetchJobsQuery
  ...

We can run this program with something like:

main :: IO ()
main = runEff $ runTrace $ runDatabase $ doWork

The result of this is that when we call doWork we create a new tracing span "doWork", and then within this span we get a child span "runQuery":

  • doWork
    • runQuery

But when we think about it, this is quite remarkable action at a distance! The Database interpreter knows nothing about doWork creating spans, but somehow everything fits together perfectly.

Even more fascinating is that this works even when we split things apart further:

doWork :: Trace :> es => Eff es ()
doWork = inSpan "doWork" do
  jobs <- getJobs
  ...

getJobs :: Database :> es => Eff es Jobs -- No `Trace` here at all!
getJobs = runQuery fetchJobsQuery

I skipped the implementation of Trace and inSpan at the start, but they work in exactly the same way as Reader and local! The Trace effect is basically a reader effect that has access to the current tracing span (if any), and whenever we call inSpan we look up the current span and create a new span that is a child of it (à la local). We then use local to run the associated block of code with the current span changed to this new span.
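
In sketch form (simplified so it is self-contained: the "span" here is just a list of names printed with indentation, not a real OpenTelemetry span), such an interpreter might look like this:

{-# LANGUAGE GHC2021 #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}

import Effectful
  ( Dispatch (Dynamic), DispatchOf, Eff, Effect, IOE, liftIO, runEff, (:>) )
import Effectful.Dispatch.Dynamic (localSeqUnlift, reinterpret, send)
import Effectful.Reader.Static (ask, local, runReader)

data Trace :: Effect where
  InSpan :: String -> m a -> Trace m a

type instance DispatchOf Trace = Dynamic

inSpan :: Trace :> es => String -> Eff es a -> Eff es a
inSpan name action = send (InSpan name action)

-- Interpret Trace via a Reader holding the current span path: entering a
-- span prints its name at the current depth, then runs the body under
-- local, so spans sent from anywhere inside (including other interpreters)
-- show up as children.
runTrace :: IOE :> es => Eff (Trace : es) a -> Eff es a
runTrace = reinterpret (runReader ([] :: [String])) $ \env (InSpan name action) -> do
  path <- ask @[String]
  liftIO $ putStrLn (replicate (2 * length path) ' ' ++ name)
  localSeqUnlift env $ \unlift -> local (name :) (unlift action)

-- ghci> runEff . runTrace $ inSpan "doWork" (inSpan "runQuery" (pure ()))
-- doWork
--   runQuery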

This is really what I wanted to know whether bluefin was capable of expressing. It’s this interaction between effects that I find incredibly powerful, and it works out very naturally in effectful. As @michaelpj said, we did have some confusion understanding this at first - we thought that the runQuery operation itself had to have the Trace effect for this to work - that is, we thought we’d have to have:

data Database :: Effect where
  RunQuery :: Trace :> es => Query -> Database (Eff es) QueryResults

But the real beauty is we don’t have to do this at all - tracing in the interpretation of the Database effect is entirely hidden from users, but if users themselves use tracing it also perfectly interacts with it!

But we do even more with this Reader-like behavior. Another thing that we’ve found really powerful is the ability to capture the current tracing span and then restore it later. This one is a little harder to motivate, but here’s one fairly self-contained example. We have a machine learning inference step. Running the inference is relatively quick, but we also want to archive the inference so we can review it at a later date. This archiving should have tracing spans, and we want to associate this with the inference step, but we don’t actually want to block on archiving - it should happen in the background. Our solution is to have a separate archiving thread that consumes a queue of (InferenceResult, TracingContext). Whenever we archive an inference, we first reset our context to the given context:

runDetector = reinterpret withSavedInferences \_ -> \case
  Detect image -> do
    inSpan "detect" do
      found <- runInference image

      inferenceResults <- ask @(TQueue Inference)
      sourceContext <- getContext
      atomically $
        writeTQueue
          inferenceResults
          Inference
            { sourceImage = image
            , sourceContext
            }

      pure found
  where
    withSavedInferences m = Ki.scoped \ki -> do
      inferenceResults <- atomically $ newTQueue @Inference

      _ <- fork ki do
        forever $ tryAny do
          saveInference =<< atomically (readTQueue inferenceResults)

      runReader inferenceResults m

    saveInference inference = withContext inference.sourceContext $ inSpan "saveInference" do
      ...

Again, if saveInference calls anything that uses Tracing - even if it’s within an effect interpreter like we saw with Database - everything gets associated as expected.

Hopefully that’s given an insight into why, from a practical point of view, one might want this local functionality! What I really like is that I don’t really have to think about how any of this works. I don’t have to think about state or mutation - I just think about how Reader works.

I look forward to hearing if bluefin can do any of this! If this isn’t clear enough, I’m happy to try and put a self-contained effectful example together.


Thanks @ocharles, this was very enlightening! Having thought about this I now understand a lot more about many things.

The short response is: yes, I see why these “dynamic” effects are useful, and yes, Bluefin can do them. See the code below, and the comment about main for the sample trace output.


To elaborate on the utility of dynamic effects, the point is that we want to be able to do this

doWork :: Trace :> es => Eff es ()
doWork = inSpan "doWork" do
  jobs <- getJobs
  ...

getJobs :: Database :> es => Eff es Jobs -- No `Trace` here at all!
getJobs = runQuery fetchJobsQuery

that is, define getJobs without reference to Trace, yet define the handler to create a new span, like

runDatabase :: (IOE :> es, Trace :> es) => Eff (Database : es) a -> Eff es a
runDatabase = interpret_ \case
  RunQuery q ->
    inSpan "runQuery" do
       liftIO (performQueryIO q)

and have the "runQuery" span nest beneath the "doWork" span. (You’ll see the Bluefin implementation achieves this.) This particular example was what made it all click into place for me, so thank you very much!


To elaborate on the implementation in Bluefin, what’s going on is that the reader effect is actually a mutable state (local mutates it on enter and restores it on exit), but given that local is the only thing that mutates the state we have the nice property that these two are always the same:

do
  r1 <- ask
  r2 <- ask
  pure (r1, r2)

do
  r1 <- ask
  pure (r1, r1)

So, in the Bluefin version below I implement Reader using State (i.e. mutable state). This is actually what effectful does too! local from Effectful.Reader.Static uses localStaticRep which modifies Eff's Env in a bracket (actually inlineBracket) that restores it afterwards. That’s the same as my Bluefin version.

This implementation scheme is what @arybczak suggested, with the caveat that “what if you then add forkIO to the equation”? Well, that would be bad. The IORef underlying the State would be accessed by multiple threads and it would go wrong. That’s already a problem for Bluefin though. One mitigation would be to take an effectful-style approach and track all mutable states in Eff. Then they can be cloned when we enter new threads. I don’t know if I’ll do that or just detect when withRunInIO is being used to launch a new thread and abort at run time (like effectful does with the SeqUnlift strategy). That must be pretty inefficient with a lot of states in scope, so I think I’d probably rather just come up with safe native Bluefin concurrency patterns, if that’s practical. We shall see.

EDIT: By the way, this shows that my intuition about not understanding the use case of this feature, and my intuition about Bluefin not being able to support it, were both completely wrong. I got bamboozled because I didn’t understand the importance of it being implemented by mutation under the hood.

Preamble
{-# LANGUAGE GHC2021 #-}
{-# LANGUAGE DerivingStrategies #-}
{-# OPTIONS_GHC -Wall #-}

import Bluefin.Compound
  ( Handle,
    makeOp,
    mapHandle,
    useImplIn,
    useImplUnder,
  )
import Bluefin.Eff
  ( Eff,
    bracket,
    runEff,
    (:&),
    (:>),
  )
import Bluefin.IO (IOE, effIO)
import Bluefin.State (State, evalState, get, put)
import Bluefin.Stream
  ( Stream,
    forEach,
    yield,
  )
import Data.List ()
import Prelude hiding (span)

newtype Reader r e = MkReader (State r e)
  deriving newtype (Handle)

runReader ::
  r ->
  (forall e. Reader r e -> Eff (e :& es) a) ->
  Eff es a
runReader r k = evalState r (k . MkReader)

ask :: (e :> es) => Reader r e -> Eff es r
ask (MkReader st) = get st

local ::
  (e1 :> es) =>
  Reader r e1 ->
  (r -> r) ->
  Eff es a ->
  Eff es a
local (MkReader st) f k = do
  orig <- get st
  bracket
    (put st (f orig))
    (\() -> put st orig)
    (\() -> k)

data Query r = MkQuery [r]

type Job = String

fetchJobsQuery :: Query Job
fetchJobsQuery = MkQuery ["Job 1", "Job 2", "Job 3"]

performQueryIO :: Query a -> IO [a]
performQueryIO (MkQuery as) = pure as

-- > main
-- doWork
--   runQuery
--     A trace inside runQuery
--   The jobs were
--     ["Job 1","Job 2","Job 3"]
main :: IO ()
main = runEff $ \io -> do
  runTrace io $ \tr -> do
    runDatabase tr io $ \db -> do
      doWork tr db

data Database es = MkDatabase
  { runQueryImpl ::
      forall e a.
      Query a ->
      Eff (e :& es) [a]
  }

instance Handle Database where
  mapHandle db =
    MkDatabase
      { runQueryImpl = \q ->
          useImplUnder (runQueryImpl db q)
      }

runDatabase ::
  (e1 :> es, e2 :> es) =>
  Trace e1 ->
  IOE e2 ->
  (forall e. Database e -> Eff (e :& es) a) ->
  Eff es a
runDatabase tr io k =
  useImplIn
    k
    MkDatabase
      { runQueryImpl =
          \q -> do
            inSpan tr "runQuery" $ do
              trace tr "A trace inside runQuery"
              effIO io (performQueryIO q)
      }

runQuery :: (e :> es) => Database e -> Query a -> Eff es [a]
runQuery db q = makeOp (runQueryImpl (mapHandle db) q)

data Trace e
  = MkTrace (Reader Int e) (Stream String e)

runTrace ::
  (e1 :> es) =>
  IOE e1 ->
  (forall e. Trace e -> Eff (e :& es) r) ->
  Eff es r
runTrace io k = runReader 0 $ \re -> do
  forEach
    ( \stream -> do
        useImplIn k (MkTrace (mapHandle re) (mapHandle stream))
    )
    (\line -> effIO io (putStrLn line))

trace :: (e :> es) => Trace e -> String -> Eff es ()
trace (MkTrace re stream) msg = do
  n <- ask re
  yield stream (replicate (2 * n) ' ' <> msg)

inSpan ::
  (e1 :> es) =>
  Trace e1 ->
  String ->
  Eff es a ->
  Eff es a
inSpan tr@(MkTrace re _) span k = do
  trace tr span
  local re (+ 1) k

doWork ::
  (e1 :> es, e2 :> es) =>
  Trace e1 ->
  Database e2 ->
  Eff es ()
doWork tr db = inSpan tr "doWork" $ do
  jobs <- getJobs db
  inSpan tr "The jobs were" $ do
    trace tr (show jobs)

-- No `Trace` here at all!
getJobs :: (e :> es) => Database e -> Eff es [Job]
getJobs db = runQuery db fetchJobsQuery

From a definitional standpoint, a higher-order effect is simply an effect that includes higher-order operations. An effect is a set of operations, and higher-order operations are those that include actions (such as Eff es or m) in their argument positions.

I don’t think there’s any inconsistency in how people use these terms. At the very least, I don’t believe that dynamic scoping behavior is part of the definition of higher-order effects. The definition of higher-order effects is formal and symbolic regarding their data structures and is independent of their behavior, semantics, or functionality—that is, independent of the question of “what can be achieved with them.”

I consider them to be on separate axes. I imagine that there’s a broad framework called higher-order effects, within which countless semantic possibilities exist. Among those, we’re searching for user-friendly and practical semantics that are useful to users.


The functionality of higher-order effects lies in being able to handle higher-order operations (like local or inSpan) within the same framework as first-order operations (like ask or get). In other words, it allows us to uniformly handle both first-order and higher-order operations as first-class effects.

For example, in effectful, thanks to higher-order effects, the interpose function—as shown in this post—can be used not only for first-order operations like ask but also for higher-order operations like inSpan. (The following is pseudocode):

logOnSpan :: (Trace :> es, Log :> es) => Eff es a -> Eff es a
logOnSpan = interpose \(InSpan spanName action) -> do
    log $ "[logOnSpan] Start " <> spanName
    inSpan spanName action
    log $ "[logOnSpan] End " <> spanName

main = runTrace $ runLog $ do
  inSpan "scope1" do
    logOnSpan do
      inSpan "scope2" do
        putStrLn "hello"

{-
> main
[logOnSpan] Start scope2
hello
[logOnSpan] End scope2
-}

I suspect that currently, bluefin's framework for implementing effects is quite different from existing ones, and it might be difficult to find neat counterparts to higher-order effects within bluefin. If we were to say that “bluefin supports higher-order effects,” I think that would mean—as mentioned in this discussion—that not only the behavior of ask but also that of local can be modified after the fact by changing the value-level local handler that is passed in.

Initially, I thought that bluefin was propagating evidence (effect handlers) via arguments instead of storing the evidence in a vector, as is done in so-called evidence passing. In this method, for example, the Reader effect type—including local—would be defined as follows:

data Reader r es = Reader
  { ask   :: Eff es r
  , local :: forall a. (r -> r) -> Eff es a -> Eff es a
  }
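
To spell out that idea with a hypothetical, IO-specialised sketch (not Bluefin's actual API): if the handler really were such a record, local's behaviour could be swapped at the value level after the fact.

{-# LANGUAGE RankNTypes #-}

import Data.IORef (newIORef, readIORef, writeIORef)

-- Hypothetical record-of-operations handler, specialised to IO.
data ReaderH r = ReaderH
  { askH   :: IO r
  , localH :: forall a. (r -> r) -> IO a -> IO a
  }

-- The "standard" handler, backed by an IORef; a real version would use
-- bracket so the environment is restored on exceptions.
newReaderH :: r -> IO (ReaderH r)
newReaderH r0 = do
  ref <- newIORef r0
  pure ReaderH
    { askH = readIORef ref
    , localH = \f m -> do
        old <- readIORef ref
        writeIORef ref (f old)
        a <- m
        writeIORef ref old
        pure a
    }

-- Because the handler is an ordinary value, local can be given different
-- behaviour afterwards, e.g. logging every scope change.
loggingLocal :: Show r => ReaderH r -> ReaderH r
loggingLocal h = ReaderH
  { askH = askH h
  , localH = \f m -> do
      r <- askH h
      putStrLn ("entering local; environment was " ++ show r)
      localH h f m
  }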

However, when I looked into how custom effects are implemented using the Compound module, it seemed different from what I had imagined. Perhaps passing effect handlers, rather than using IORef, enables more functionality. But since I’m not very familiar with bluefin's mechanisms, please correct me if I’m mistaken.

I learned quite a lot about extensible effects in the past few days and the puzzle pieces are slowly coming together. Let me summarise.

Dynamic vs. Static effects

I think part of the confusion is this: When people talk about effect systems, it is unclear whether they mean a type system that tracks effects or one which allows extending the language with custom effects, the implementation of which is determined by a (usually lexically-enclosing) handler. These languages often implement effects and handlers by translating to delimited control operators with special support in the runtime. In particular, delimited control operators allow for custom control effects that may choose to discard the continuation and thus implement stack unwinding.

Haskell libraries such as effectful provide the capability to define such extensible effects and their corresponding handlers. I think an extensible effect is what effectful calls a “dynamically-dispatched” effect, because an effectful program is given meaning dynamically depending on the installed effect handlers.
Edit: However, I think effectful does not support extensible control effects, only effects that do not need delimited continuations.

It is simple to implement Reader, State, Exception, … effects wrt. delimited continuations (which is not what either effectful or bluefin does).
NB: Effects are actually (uses of) classes, not instances, so the mtl equivalent would be MonadReader, MonadState, MonadThrow. Effectful programs have their semantics fixed by providing them with the corresponding effect handlers (instances at ReaderT, StateT, …).
It is a reality that GHC’s specialiser gives up at some point. If effectful programs cannot be specialised for their handlers, they will be quite inefficient! Yet still much, much more efficient than the aforementioned unspecialised mtl-style encoding, where even (>>=) cannot be specialised.
I think this was the reason that @lexi.lambda wanted to introduce delimited control ops: So that inevitably unspecialised/dynamically-dispatched control-effectful programs still have decent performance in the absence of any custom effect.

Yet, the unspecialised, dynamic effect handler implementation of effects such as Reader, State and Exception in terms of delimited continuations is pretty inefficient, because any operation will need to copy a fragment of the stack, as required by delimited control operators and quite unnecessary for something as simple as ask. Therefore, effectful eschews control operators and furthermore came up with the notion of statically-dispatched effects, which is basically a manual specialisation of effects for a particular effect handler.
These statically-dispatched effects are not implemented in terms of inefficient delimited control ops nor dynamic handles, but in terms of specialised implementations. See also the next section.

As I understand it, bluefin takes the pragmatic stance that people are unlikely to truly need extensible/dynamic effects (question (2) in the OP is exactly about that), and thus only provides static implementations. Case in point: there is not a single occurrence of prompt#/control0# in its repo. It provides means to write custom Compound effects for building larger effects out of the builtin ones, though. Great ergonomics and usefulness of its API aside, I would say it is an effect-tracking library, but not an extensible effects library. effectful is in the latter category.
Edit: This statement was misguided; it appears that bluefin simply does not support extensible control effects and hence does not use delimited continuations, but that’s just like effectful.
bluefin-algae, on the other hand, adds control effects to bluefin as well by providing a wrapper around the delimited control primops.

Specialising for known effect handlers implemented using delimited control

To get a feeling for what is inefficient about the dynamic delimited control implementation: The usual effect handlers for Reader, State, etc., that correspond to mtl instances for ReaderT and StateT, etc., are tail-resumptive. This means their effect handlers do not actually modify the stack.
For example, ask (MkReader tag ref) = control0 tag (\k -> k (readIORef ref)) is equivalent to readIORef ref, and that is at least its static implementation in bluefin.
In general, implementing tail-resumptive handlers in terms of control0# tag (\k -> k e) (which copies the stack into the heap just to immediately resume it) is absolute overkill! Hence I propose to add a rewrite rule to GHC here: #25490: Delimited Continuations: Optimise tail-resumptive operations · Issues · Glasgow Haskell Compiler / GHC · GitLab.
Likewise, Exceptions never resume, meaning they unconditionally discard the continuation k (control0# tag (\_k -> handle exn), NB: _k dead), so it is absolute overkill to copy the stack as well.
Hence I propose to add a new primop to GHC that simply aborts the prompt and does not copy the stack at all (pretty much like raiseIO#), plus a corresponding rewrite rule: #25489: Delimited Continuations: Implement `abort#` to avoid copying the continuation · Issues · Glasgow Haskell Compiler / GHC · GitLab.

Net result: Reader, State and Exception can be implemented dynamically just as efficiently as statically. Now, I would like to see GHC improve such that any known, dynamic implementation is as efficient as its static counterpart. This would ensure that dynamic effects implemented by users can be just as efficient as the builtin static ones. Fixing the issues linked above should go some way towards that. However, I can see two problems:

  1. I’m a bit skeptical about custom Exception, because prompt/control0 is not expressive enough to express async-exception safe bracket.
  2. Imagine a computation using a Reader effect, compiled separately from the vanilla Reader effect handler. In order to get good code for the effectful computation, we would need to specialise it for the effect handler/capability. That is likely to fail at some point when the computation becomes too large. So the effectful design where we can choose between dynamic and static dispatch seems quite important in practice, because AFAIU static dispatch ensures that we do not need to do a costly control0#.

That said, I have mixed feelings about control0#. Since it is not zero-cost and I almost never need the full generality that it offers, I would try to avoid it at all cost, which is impossible when specialisation fails. If GHC had segmented stacks like OCaml seems to have, we could make it near zero-cost, perhaps so much that it doesn’t matter much when specialisation fails. Perhaps that would allow effectful and bluefin to offer extensible control effects as well.

Higher-order effects

Higher-order effects are effect ops that take effectful computations as arguments, for example local.
It is a bit challenging to support extensible (dynamically-dispatched) higher-order effect ops.
NB: local is supposed to be an effect op and not a derived implementation in terms of ask.
In effectful, I think you would write

  data Reader r :: Effect where
    Ask  :: Reader r m r
    Local :: (r -> r) -> m a -> Reader r m a

and then have a handler (roughly, not type-checked.)

runReaderVanilla r m = newIORef r >>= \ref -> interpret_ go m 
  where
    go Ask = liftIO (readIORef ref)
    go (Local f m) = do
      r <- liftIO $ readIORef ref
      liftIO $ writeIORef ref (f r)
      a <- m
      liftIO $ writeIORef ref r
      return a

(I think I must actually use interpret instead and do something like a <- localSeqUnlift env $ \unlift -> unlift m instead of a <- m. Bother.)

This is just one effect handler for Reader. Others (i.e. logging ones) are conceivable.
Note how the encoding of Local as part of the Reader data declaration is not entirely trivial, and how I should have actually used interpret instead of interpret_…

If your effect is statically-bound, there is just one effect handler. Any occurrence of local can be specialised for that effect handler and you do not need to think long and hard about how to encode higher-order effects or what they mean.
This also means you cannot monkey-patch the definition of local after the fact to support logging.
(But as Tom has demonstrated, there are often different ways to introduce seams that achieve logging etc.)

My personal summary regarding bluefin: since the encoding of higher-order effect operations is only really challenging in the dynamically-dispatched scenario, and bluefin does not really support extensible, dynamically-dispatched effects, bluefin can be quite a bit simpler.
Users who want dynamically-dispatched effects can use bluefin-algae, though! All the trickiness wrt. higher-order effects applies there. I’m a bit unsure whether the current implementation supports higher-order effects such as Local above. Perhaps @lysxia knows?

Edit: Since a few days ago, bluefin seems to support extensible (non-control) effects as well. I haven’t reviewed it yet.


Edit: Conclusion: bluefin and effectful seem to offer extensible non-control effects. For control effects use bluefin-algae, but be aware that performance of control0# might be insufficient unless someone implements segmented stacks in GHC.

Another conclusion is that perhaps effect systems should offer a means to annotate effect declarations with the class of effect. I.e., whether it is a control effect (so is not tail-resumptive), whether it is aborting (i.e. discards the continuation like raiseIO#), or multi-prompt etc. That would allow for suitably constrained dynamic implementations that would still perform well. I believe specialisation could be achieved by a suitable type class algebra. Aha, indeed this is exactly what Koka does! There you declare the generality of an effect (fun for tail-resumptive, ctl for control-effect). This remark was also pretty interesting.


Since a few days ago, bluefin seems to support extensible (non-control) effects as well. I haven’t reviewed it yet.

Those seem to be some new primitives in bluefin’s “internal” API. I forgot there is this API which dates from way back. Relatedly, I also have a PR for a public API of such “extensible non-control effects”.

If GHC had segmented stacks like OCaml seems to have, we could make it near zero-cost

Here’s a funny idea about that. At the end of the day, OCaml uses segmented stacks to enable concurrency. In Haskell, we already have concurrency. Let’s think backwards: why not use threads to emulate segmented stacks? I can think of so many drawbacks it must be the worst idea I’ve ever had but it is still terribly funny to think about it.

delimited continuations

I’m still looking for a good application of those, or equivalently, the non-tail-resumptive handlers that bluefin-algae lets you write. I know only about concurrency and backtracking as vague directions, but beyond that I’m blanking on more convincing examples.

The issues of performance (repeatedly copying the stack) and safety of multishot continuations wrt. bracket seem extremely tricky to address in a Haskell library, as opposed to languages with specialized type systems and runtimes. I wonder if there is a less naive way to go about using the delcont primitives.
