What is a higher-order effect?

Couldn’t you implement the same logic with a static reader effect? That is, instead of implementing a new dynamic effect SomeAction, why not define someAction = ask >>= SomeActionWithCtxt, then interpret the static effect SomeActionWithCtxt r without running ask? So I wouldn’t say that dynamic effects are absolutely mission-critical for your use case.
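
Concretely, the suggestion is something like this (a sketch with illustrative names, not type-checked):

someAction :: (Reader Ctxt :> es, SomeActionWithCtxt :> es) => Eff es ()
someAction = do
  r <- ask
  -- the rest is a static effect taking the context as a plain argument,
  -- so it can be interpreted on its own, without running `ask`
  someActionWithCtxt r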

Not sure if that’s helpful or not, but the “substitution metaphor” sounds a bit like the difference between effectful by-name evaluation (dynamic) vs. effectful by-value evaluation (static). In terms of this thread, static effects seem to be “squeezed dry” at the interpretation site before substitution, whereas the interpretations of dynamic effects are substituted wholesale into use sites.


That’s how it seems to me, but I welcome more detailed examples.

I agree that is a good way of thinking about it. I’d still really like to know whether the definition of “higher order effects” includes that they must be dynamically resolved, or whether the static/dynamic distinction is a separate axis.


The heftia repo (GitHub - sayo-hs/heftia: higher-order algebraic effects done right for Haskell; thanks for the link :)) seems to consider the dynamic/static distinction separate from higher-order effects.

It lists MonadUnliftIO, and in particular bracket, catch, and others, as examples of higher-order effects. I think a higher-order effect may simply be an effect operation that takes effectful computations as arguments.
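
For instance, in an effectful-style GADT notation (a sketch, not any library’s actual API), the guarded or handled computations appear as m ... arguments:

data Resource :: Effect where
  Bracket :: m a -> (a -> m c) -> (a -> m b) -> Resource m b

data Catch e :: Effect where
  Catch :: m a -> (e -> m a) -> Catch e m a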

Reading the abstract of https://dl.acm.org/doi/pdf/10.1145/3571255, it appears that when a higher-order effect composes two separately compiled bits of code, these bits of code need to dynamically interact with each other in some way.

I understand this as follows. For example (roughly, haven’t type-checked)

withFile fileName f = bracket (openFile fileName) closeFile f

code withHandle n = catch (withHandle $ \h -> if n > 42 then throwIO ... else return 3) (\_ -> return 4)

main = code (withFile "/tmp/blah") 2300 >>= print

Note that withFile and code can be compiled in separate modules, and code throws an exception type that cannot be caught anywhere else. Then in code, it appears we know exactly what will happen when we call throwIO: We will immediately call the exception handler and return 4.

But now consider what happens in main, where we combine code with withFile. Suddenly, throwIO does not immediately call the exception handler; rather it interacts with the use of bracket in withFile and closes the file handle first. This is how I understand the “higher-order effects break modular reasoning” point of the paper.


Page 3 of the “Hefty Algebras” paper is quite interesting; it defines algebraicity of an effect operation (a notion that seems quite related to “squeezing dry” its effectful computation arguments) and remarks that local is a non-algebraic operation. I’m tempted to think that “algebraic” means “static coincides with dynamic dispatch”.
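
For reference, the algebraicity law (paraphrased from the paper) says an operation commutes with sequencing, and local visibly fails it:

-- op is algebraic iff:  op m1 ... mn >>= k  ≡  op (m1 >>= k) ... (mn >>= k)
-- local is non-algebraic:
--   local f m >>= k    -- k runs in the *outer* environment
-- is in general different from
--   local f (m >>= k)  -- k runs in the *modified* environment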


I’d been hoping to write this post since you gave your talk, but unfortunately I wasn’t able to find the time. A lot of discourse has happened since, and it’s been very interesting! One of my questions in the talk was about local, and there have been a lot of fascinating points in this thread about it. Something I wanted to raise was a concrete example of where local is practically useful.

Perhaps the biggest use of it in CircuitHub’s codebase has been for tracing instrumentation, and it’s been so useful I’d consider this functionality indispensable. So without further ado, let’s start looking at some code. We are effectful users, so this example is going to use effectful. First, let’s look at the API of the Trace effect.

data Trace :: Effect

inSpan :: (HasCallStack, Trace :> es) => Text -> Eff es a -> Eff es a

Essentially the Trace effect lets us scope a block of code as belonging to an OpenTelemetry tracing span. Clients can use this to instrument their code, for example

doWork :: Trace :> es => Eff es ()
doWork = inSpan "doWork" do
  ...

But another use of this is to use it from within another effect handler:

data Database :: Effect where
  RunQuery :: Query -> Database m QueryResults

runDatabase :: (IOE :> es, Trace :> es) => Eff (Database : es) a -> Eff es a
runDatabase = interpret_ \case
  RunQuery q ->
    inSpan "runQuery" do
       liftIO (performQueryIO q)

This is already quite cool, and lets us do things like

doWork :: Trace :> es => Eff es ()
doWork = inSpan "doWork" do
  jobs <- runQuery fetchJobsQuery
  ...

We can run this program with something like:

main :: IO ()
main = runEff $ runTrace $ runDatabase $ doWork

The result of this is that now when we call doWork we create a new tracing span "doWork", and within this span we get a child span "runQuery":

  • doWork
    • runQuery

But when we think about it, this is quite remarkable action at a distance! The Database interpreter knows nothing about doWork creating spans, but somehow everything fits together perfectly.

Even more fascinating, this works even when we split things apart further:

doWork :: Trace :> es => Eff es ()
doWork = inSpan "doWork" do
  jobs <- getJobs
  ...

getJobs :: Database :> es => Eff es Jobs -- No `Trace` here at all!
getJobs = runQuery fetchJobsQuery

I skipped the implementation of Trace and inSpan at the start, but they work in exactly the same way as Reader and local! The Trace effect is basically a reader effect that has access to the current tracing span (if any), and whenever we call inSpan we look up the current span and create a new span that is a child of it (à la local). We then use local to run the associated block of code with the current span changed to this new child span.
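
A minimal sketch of that scheme (with a hypothetical Span type and newChildSpan helper, not the real OpenTelemetry API), pretending the Trace handle is literally a Reader over the current span:

inSpan :: (Reader (Maybe Span) :> es, IOE :> es) => Text -> Eff es a -> Eff es a
inSpan name action = do
  parent <- ask                                -- current span, if any
  child  <- liftIO (newChildSpan parent name)  -- hypothetical allocator
  local (const (Just child)) action            -- scoped override, à la local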

This is really what I wanted to know: whether bluefin was capable of expressing this. It’s this interaction between effects that I find incredibly powerful, and it works out very naturally in effectful. As @michaelpj said, we did have some confusion understanding this at first - we thought that the runQuery method itself must have the Trace effect for this to work - that is, we thought we’d have to have:

data Database :: Effect where
  RunQuery :: Trace :> es => Query -> Database (Eff es) QueryResults

But the real beauty is we don’t have to do this at all - tracing in the interpretation of the Database effect is entirely hidden from users, but if users themselves use tracing, the two interact perfectly!

But we do even more with this Reader-like behavior. Another thing that we’ve found really powerful is the ability to capture the current tracing span and then restore it later. This one is a little harder to motivate, but here’s one fairly self-contained example. We have a machine learning inference step. Running the inference is relatively quick, but we also want to archive the inference so we can review it at a later date. This archiving should have tracing spans, and we want to associate these spans with the inference step, but we don’t actually want to block on archiving - it should happen in the background. Our solution is to have a separate archiving thread that consumes a queue of (InferenceResult, TracingContext). Whenever we archive an inference, we first reset our context to the given context:

runDetector = reinterpret withSavedInferences \_ -> \case
  Detect image -> do
    inSpan "detect" do
      found <- runInference image

      inferenceResults <- ask @(TQueue Inference)
      sourceContext <- getContext
      atomically $
        writeTQueue
          inferenceResults
          Inference
            { sourceImage = image
            , sourceContext
            }

      pure found
  where
    withSavedInferences m = Ki.scoped \ki -> do
      inferenceResults <- atomically $ newTQueue @Inference

      _ <- fork ki do
        forever $ tryAny do
          saveInference =<< atomically (readTQueue inferenceResults)

      runReader inferenceResults m

    saveInference inference = withContext inference.sourceContext $ inSpan "saveInference" do
      ...

Again, if saveInference calls anything that uses tracing - even if it’s within an effect interpreter like we saw with Database - everything gets associated as expected.

Hopefully that’s given an insight into why, from a practical point of view, one might want this local functionality! What I really like is that I don’t really have to think about how any of this works. I don’t have to think about state or mutation - I just think about how Reader works.

I look forward to hearing if bluefin can do any of this! If this isn’t clear enough, I’m happy to try and put a self-contained effectful example together.


Thanks @ocharles, this was very enlightening! Having thought about this I now understand a lot more about many things.

The short response is: yes, I see why these “dynamic” effects are useful, and yes, Bluefin can do them. See the code below, and the comment about main for the sample trace output.


To elaborate on the utility of dynamic effects, the point is that we want to be able to do this

doWork :: Trace :> es => Eff es ()
doWork = inSpan "doWork" do
  jobs <- getJobs
  ...

getJobs :: Database :> es => Eff es Jobs -- No `Trace` here at all!
getJobs = runQuery fetchJobsQuery

that is, define getJobs without reference to Trace, yet define the handler to create a new span, like

runDatabase :: (IOE :> es, Trace :> es) => Eff (Database : es) a -> Eff es a
runDatabase = interpret_ \case
  RunQuery q ->
    inSpan "runQuery" do
       liftIO (performQueryIO q)

and have the "runQuery" span nest beneath the "doWork" span. (You’ll see the Bluefin implementation achieves this.) This particular example was what made it all click into place for me, so thank you very much!


To elaborate on the implementation in Bluefin, what’s going on is that the reader effect is actually a mutable state (local mutates it on entry and restores it on exit), but given that local is the only thing that mutates the state, we have the nice property that these two programs are always equivalent:

do
  r1 <- ask
  r2 <- ask
  pure (r1, r2)

do
  r1 <- ask
  pure (r1, r1)

So, in the Bluefin version below I implement Reader using State (i.e. mutable state). This is actually what effectful does too! local from Effectful.Reader.Static uses localStaticRep which modifies Eff's Env in a bracket (actually inlineBracket) that restores it afterwards. That’s the same as my Bluefin version.

This implementation scheme is what @arybczak suggested, with the caveat that “what if you then add forkIO to the equation”? Well, that would be bad. The IORef underlying the State would be accessed by multiple threads and it would go wrong. That’s already a problem for Bluefin though. One mitigation would be to take an effectful-style approach and track all mutable states in Eff. Then they can be cloned when we enter new threads. I don’t know if I’ll do that or just detect when withRunInIO is being used to launch a new thread and abort at run time (like effectful does with the SeqUnlift strategy). That must be pretty inefficient with a lot of states in scope, so I think I’d probably rather just come up with safe native Bluefin concurrency patterns, if that’s practical. We shall see.

EDIT: By the way, this shows that my intuition about not understanding the use case of this feature, and my intuition about Bluefin not being able to support it, were both completely wrong. I got bamboozled because I didn’t understand the importance of it being implemented by mutation under the hood.

Preamble
{-# LANGUAGE GHC2021 #-}
{-# LANGUAGE DerivingStrategies #-}
{-# OPTIONS_GHC -Wall #-}

import Bluefin.Compound
  ( Handle,
    makeOp,
    mapHandle,
    useImplIn,
    useImplUnder,
  )
import Bluefin.Eff
  ( Eff,
    bracket,
    runEff,
    (:&),
    (:>),
  )
import Bluefin.IO (IOE, effIO)
import Bluefin.State (State, evalState, get, put)
import Bluefin.Stream
  ( Stream,
    forEach,
    yield,
  )
import Data.List ()
import Prelude hiding (span)

newtype Reader r e = MkReader (State r e)
  deriving newtype (Handle)

runReader ::
  r ->
  (forall e. Reader r e -> Eff (e :& es) a) ->
  Eff es a
runReader r k = evalState r (k . MkReader)

ask :: (e :> es) => Reader r e -> Eff es r
ask (MkReader st) = get st

local ::
  (e1 :> es) =>
  Reader r e1 ->
  (r -> r) ->
  Eff es a ->
  Eff es a
local (MkReader st) f k = do
  orig <- get st
  bracket
    (put st (f orig))
    (\() -> put st orig)
    (\() -> k)

data Query r = MkQuery [r]

type Job = String

fetchJobsQuery :: Query Job
fetchJobsQuery = MkQuery ["Job 1", "Job 2", "Job 3"]

performQueryIO :: Query a -> IO [a]
performQueryIO (MkQuery as) = pure as

-- > main
-- doWork
--   runQuery
--     A trace inside runQuery
--   The jobs were
--     ["Job 1","Job 2","Job 3"]
main :: IO ()
main = runEff $ \io -> do
  runTrace io $ \tr -> do
    runDatabase tr io $ \db -> do
      doWork tr db

data Database es = MkDatabase
  { runQueryImpl ::
      forall e a.
      Query a ->
      Eff (e :& es) [a]
  }

instance Handle Database where
  mapHandle db =
    MkDatabase
      { runQueryImpl = \q ->
          useImplUnder (runQueryImpl db q)
      }

runDatabase ::
  (e1 :> es, e2 :> es) =>
  Trace e1 ->
  IOE e2 ->
  (forall e. Database e -> Eff (e :& es) a) ->
  Eff es a
runDatabase tr io k =
  useImplIn
    k
    MkDatabase
      { runQueryImpl =
          \q -> do
            inSpan tr "runQuery" $ do
              trace tr "A trace inside runQuery"
              effIO io (performQueryIO q)
      }

runQuery :: (e :> es) => Database e -> Query a -> Eff es [a]
runQuery db q = makeOp (runQueryImpl (mapHandle db) q)

data Trace e
  = MkTrace (Reader Int e) (Stream String e)

runTrace ::
  (e1 :> es) =>
  IOE e1 ->
  (forall e. Trace e -> Eff (e :& es) r) ->
  Eff es r
runTrace io k = runReader 0 $ \re -> do
  forEach
    ( \stream -> do
        useImplIn k (MkTrace (mapHandle re) (mapHandle stream))
    )
    (\line -> effIO io (putStrLn line))

trace :: (e :> es) => Trace e -> String -> Eff es ()
trace (MkTrace re stream) msg = do
  n <- ask re
  yield stream (replicate (2 * n) ' ' <> msg)

inSpan ::
  (e1 :> es) =>
  Trace e1 ->
  String ->
  Eff es a ->
  Eff es a
inSpan tr@(MkTrace re _) span k = do
  trace tr span
  local re (+ 1) k

doWork ::
  (e1 :> es, e2 :> es) =>
  Trace e1 ->
  Database e2 ->
  Eff es ()
doWork tr db = inSpan tr "doWork" $ do
  jobs <- getJobs db
  inSpan tr "The jobs were" $ do
    trace tr (show jobs)

-- No `Trace` here at all!
getJobs :: (e :> es) => Database e -> Eff es [Job]
getJobs db = runQuery db fetchJobsQuery

From a definitional standpoint, a higher-order effect is simply an effect that includes higher-order operations. An effect is a set of operations, and higher-order operations are those that include actions (such as Eff es or m) in their argument positions.
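
In effectful-style notation, for example, Ask is first-order (no computation arguments), while InSpan is higher-order (it has an m a argument):

data Reader r :: Effect where
  Ask :: Reader r m r

data Trace :: Effect where
  InSpan :: Text -> m a -> Trace m a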

I don’t think there’s any inconsistency in how people use these terms. At the very least, I don’t believe that dynamic scoping behavior is part of the definition of higher-order effects. The definition of higher-order effects is formal and symbolic regarding their data structures and is independent of their behavior, semantics, or functionality—that is, independent of the question of “what can be achieved with them.”

I consider them to be on separate axes. I imagine that there’s a broad framework called higher-order effects, within which countless semantic possibilities exist. Among those, we’re searching for user-friendly and practical semantics that are useful to users.


The functionality of higher-order effects lies in being able to handle higher-order operations (like local or inSpan) within the same framework as first-order operations (like ask or get). In other words, it allows us to uniformly handle both first-order and higher-order operations as first-class effects.

For example, in effectful, thanks to higher-order effects, the interpose function—as shown in this post—can be used not only for first-order operations like ask but also for higher-order operations like inSpan. (The following is pseudocode):

logOnSpan :: (Trace :> es, Log :> es) => Eff es a -> Eff es a
logOnSpan = interpose \(InSpan spanName action) -> do
    log $ "[logOnSpan] Start " <> spanName
    a <- inSpan spanName action
    log $ "[logOnSpan] End " <> spanName
    pure a

main = runTrace $ runLog $ do
  inSpan "scope1" do
    logOnSpan do
      inSpan "scope2" do
        putStrLn "hello"

{-
> main
[logOnSpan] Start scope2
hello
[logOnSpan] End scope2
-}

I suspect that currently, bluefin’s framework for implementing effects is quite different from existing ones, and it might be facing difficulties in neatly finding counterparts to higher-order effects within bluefin. If we were to say that “bluefin supports higher-order effects,” I think that would mean—as mentioned in this discussion—that not only the behavior of ask but also that of local can be modified afterward by changing the value-level local effect that is passed around.

Initially, I thought that bluefin was propagating evidence (effect handlers) via arguments instead of storing the evidence in a vector, as is done in so-called evidence passing. In this method, for example, the Reader effect type—including local—would be defined as follows:

data Reader r es = Reader
  { ask   :: Eff es r
  , local :: forall a. (r -> r) -> Eff es a -> Eff es a
  }

However, when I looked into how custom effects are implemented using the Compound module, it seemed different from what I had imagined. Perhaps passing effect handlers, rather than using IORef, enables more functionality. But since I’m not very familiar with bluefin's mechanisms, please correct me if I’m mistaken.

I learned quite a lot about extensible effects in the past few days and the puzzle pieces are slowly coming together. Let me summarise.

Dynamic vs. Static effects

I think part of the confusion is this: When people talk about effect systems, it is unclear whether they mean a type system that tracks effects or one which allows extending the language with custom effects, the implementation of which is determined by a (usually lexically-enclosing) handler. These languages often implement effects and handlers by translating to delimited control operators with special support in the runtime. In particular, delimited control operators allow for custom control effects that may choose to discard the continuation and thus implement stack unwinding.

Haskell libraries such as effectful provide the capability to define such extensible effects and their corresponding handlers. I think an extensible effect is what effectful calls a “dynamically-dispatched” effect, because an effectful program is given meaning dynamically depending on the installed effect handlers.
Edit: However, I think effectful does not support extensible control effects, only effects that do not need delimited continuations.

It is simple to implement Reader, State, Exception, … effects wrt. delimited continuations (which neither effectful nor bluefin does).
NB: Effects are actually (uses of) classes, not instances, so the mtl equivalent would be MonadReader, MonadState, MonadThrow. Effectful programs have their semantics fixed by providing them with the corresponding effect handlers (instances at ReaderT, StateT, …).
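
Concretely, the analogy looks like this (a small mtl sketch; program and result are illustrative names):

import Control.Monad.Reader

program :: MonadReader Int m => m Int  -- the class constraint is the "effect"
program = asks (+ 1)

result :: Int
result = runReader program 41  -- picking the Reader instance is the "handler"
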
It is a reality that GHC’s specialiser gives up at some point. If effectful programs cannot be specialised for their handlers, they will be quite inefficient! Yet still much, much more efficient than the aforementioned unspecialised mtl-style encoding, where even (>>=) cannot be specialised.
I think this was the reason that @lexi.lambda wanted to introduce delimited control ops: So that inevitably unspecialised/dynamically-dispatched control-effectful programs still have decent performance in the absence of any custom effect.

Yet, the unspecialised, dynamic effect handler implementation of effects such as Reader, State and Exception in terms of delimited continuations is pretty inefficient, because any operation will need to copy a fragment of the stack, as required by delimited control operators and quite unnecessary for something as simple as ask. Therefore, effectful eschews control operators and furthermore came up with the notion of statically-dispatched effects, which is basically a manual specialisation of effects for a particular effect handler.
These statically-dispatched effects are not implemented in terms of inefficient delimited control ops nor dynamic handles, but in terms of specialised implementations. See also the next section.

As I understand it, bluefin takes the pragmatic stance that people are unlikely to truly need extensible/dynamic effects (question (2) in the OP is exactly about that), and thus only provides static implementations. Case in point: there is not a single occurrence of prompt#/control0# in its repo. It provides means to write custom Compound effects for building larger effects out of the builtin ones, though. Great ergonomics and usefulness of its API aside, I would say it is an effect-tracking library, but not an extensible effects library. effectful is in the latter category.
Edit: This statement was misguided; it appears that bluefin simply does not support extensible control effects and hence does not use delimited continuations, but that’s just like effectful.
bluefin-algae, on the other hand, adds control effects to bluefin as well by providing a wrapper around the delimited control primops.

Specialising for known effect handlers implemented using delimited control

To get a feeling for what is inefficient about the dynamic delimited control implementation: The usual effect handlers for Reader, State, etc., that correspond to mtl instances for ReaderT and StateT, etc., are tail-resumptive. This means their effect handlers do not actually modify the stack.
For example, ask (MkReader tag ref) = control0 tag (\k -> k =<< readIORef ref) is equivalent to readIORef ref, and that is at least its static implementation in bluefin.
In general, implementing tail-resumptive handlers in terms of control0# tag (\k -> k e) (which copies the stack into the heap just to immediately resume it) is absolute overkill! Hence I propose to add a rewrite rule to GHC here: #25490: Delimited Continuations: Optimise tail-resumptive operations · Issues · Glasgow Haskell Compiler / GHC · GitLab.
Likewise, exceptions never resume, meaning they unconditionally discard the continuation k (control0# tag (\_k -> handle exn), NB: _k dead), so it is absolute overkill to copy the stack as well.
Hence I propose to add a new primop to GHC that simply aborts the prompt and does not copy the stack at all (pretty much like raiseIO#), plus a corresponding rewrite rule: #25489: Delimited Continuations: Implement `abort#` to avoid copying the continuation · Issues · Glasgow Haskell Compiler / GHC · GitLab.

Net result: Reader, State and Exception can be implemented dynamically just as efficiently as statically. Now, I would like to see GHC improve such that any known, dynamic implementation is as efficient as its static counterpart. This would ensure that dynamic effects implemented by users can be just as efficient as the builtin static ones. Fixing the issues linked above should go some way towards that. However, I can see two problems:

  1. I’m a bit skeptical about custom Exception, because prompt/control0 is not expressive enough to express async-exception safe bracket.
  2. Imagine a computation using a Reader effect, compiled separately from the vanilla Reader effect handler. In order to get good code for the effectful computation, we would need to specialise it for the effect handler/capability. That is likely to fail at some point when the computation becomes too large. So the effectful design, where we can choose between dynamic and static dispatch, seems quite important in practice, because AFAIU static dispatch ensures that we do not need to do a costly control0#.

That said, I have mixed feelings about control0#. Since it is not zero-cost and I almost never need the full generality that it offers, I would try to avoid it at all cost, which is impossible when specialisation fails. If GHC had segmented stacks like OCaml seems to have, we could make it near zero-cost, perhaps so much that it doesn’t matter much when specialisation fails. Perhaps that would allow effectful and bluefin to offer extensible control effects as well.

Higher-order effects

Higher-order effects are effect ops that take effectful computations as arguments, for example local.
It is a bit challenging to support extensible (dynamically-dispatched) higher-order effect ops.
NB: local is supposed to be an effect op and not a derived implementation in terms of ask.
In effectful, I think you would write

  data Reader r :: Effect where
    Ask  :: Reader r m r
    Local :: (r -> r) -> m a -> Reader r m a

and then have a handler (roughly, not type-checked):

runReaderVanilla r m = do
  ref <- liftIO (newIORef r)
  interpret (go ref) m
  where
    go ref _env Ask = liftIO (readIORef ref)
    go ref env (Local f n) = do
      old <- liftIO (readIORef ref)
      liftIO (writeIORef ref (f old))
      a <- localSeqUnlift env (\unlift -> unlift n)
      liftIO (writeIORef ref old)
      return a

(Note the a <- localSeqUnlift env $ \unlift -> unlift n step: a plain a <- n would not type-check, because the computation argument lives in the handler’s local environment, which is also why we need interpret rather than interpret_. Bother.)

This is just one effect handler for Reader. Others (e.g. logging ones) are conceivable.
Note how the encoding of Local as part of the Reader data declaration is not entirely trivial, and how it forces the use of interpret (with localSeqUnlift) instead of the simpler interpret_.

If your effect is statically-bound, there is just one effect handler. Any occurrence of local can be specialised for that effect handler and you do not need to think long and hard about how to encode higher-order effects or what they mean.
This also means you cannot monkey-patch the definition of local after the fact to support logging.
(But as Tom has demonstrated, there are often different ways to introduce seams that achieve logging etc.)

My personal summary regarding bluefin: since the encoding of higher-order effect operations is only really challenging in the dynamically-dispatched scenario, and bluefin does not really support extensible, dynamically-dispatched effects, bluefin can be quite a bit simpler.
Users who want dynamically-dispatched effects can use bluefin-algae, though! All the trickiness wrt. higher-order effects applies there. I’m a bit unsure whether the current implementation supports higher-order effects such as Local above. Perhaps @lysxia knows?

Edit: Since a few days ago, bluefin seems to support extensible (non-control) effects as well. I haven’t reviewed it yet.


Edit: Conclusion: bluefin and effectful seem to offer extensible non-control effects. For control effects use bluefin-algae, but be aware that performance of control0# might be insufficient unless someone implements segmented stacks in GHC.

Another conclusion is that perhaps effect systems should offer a means to annotate effect declarations with the class of effect. I.e., whether it is a control effect (so is not tail-resumptive), whether it is aborting (i.e. discards the continuation like raiseIO#), or multi-prompt etc. That would allow for suitably constrained dynamic implementations that would still perform well. I believe specialisation could be achieved by a suitable type class algebra. Aha, indeed this is exactly what Koka does! There you declare the generality of an effect (fun for tail-resumptive, ctl for control-effect). This remark was also pretty interesting.


Since a few days ago, bluefin seems to support extensible (non-control) effects as well. I haven’t reviewed it yet.

Those seem to be some new primitives in bluefin’s “internal” API. I forgot there is this API which dates from way back. Relatedly, I also have a PR for a public API of such “extensible non-control effects”.

If GHC had segmented stacks like OCaml seems to have, we could make it near zero-cost

Here’s a funny idea about that. At the end of the day, OCaml uses segmented stacks to enable concurrency. In Haskell, we already have concurrency. Let’s think backwards: why not use threads to emulate segmented stacks? I can think of so many drawbacks it must be the worst idea I’ve ever had, but it is still terribly funny to think about it.

delimited continuations

I’m still looking for a good application of those, or equivalently, the non-tail-resumptive handlers that bluefin-algae lets you write. I know only about concurrency and backtracking as vague directions, but beyond that I’m blanking on more convincing examples.

The issues of performance (repeatedly copying the stack) and safety of multishot continuations wrt. bracket seem extremely tricky to address in a Haskell library, as opposed to languages with specialized type systems and runtimes. I wonder if there is a less naive way to go about using the delcont primitives.


Bluefin uses threads to implement connectCoroutines. It seems to work well. What drawbacks were you thinking of?


delimited continuations

I’m still looking for a good application of those

Some practical applications that I can think of:

  • Implementing concurrency libraries and tools, like dejafu.
  • Resumable exceptions.
  • Coroutines, which are useful for many things. You can make a parser written in the usual style work incrementally, or make it yield one parse event/token at a time (instead of a list of things in one go). Some state machines can be implemented more conveniently as coroutines.
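
To make the coroutine bullet concrete, here is a toy rendering of “yield one parse event at a time” using only base (Event and the event stream are illustrative):

data Event = OpenTag String | Text String | CloseTag String
  deriving Show

-- a coroutine either yields an event and continues, or finishes
data Co a = Yield Event (Co a) | Done a

-- a "parser" producing events one at a time instead of a list in one go
events :: Co ()
events = Yield (OpenTag "p") (Yield (Text "hi") (Yield (CloseTag "p") (Done ())))

-- the consumer drives the coroutine one step at a time
drain :: Co a -> IO a
drain (Done a)    = pure a
drain (Yield e k) = print e >> drain k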

I think without control flow effects, the effect system becomes a dependency injection framework/library, with some extra features, like local discussed above.


Bluefin uses threads to implement connectCoroutines. It seems to work well. What drawbacks were you thinking of?

Rather than implementing specific effects, the problem I’m talking about is to implement algebraic effects (or delimited continuations) in general. The goal is to define a monad Free implementing the interface:

instance Monad (Free f)  -- return, (>>=)
send :: f a -> Free f a
interpret :: (forall x. f x -> (x -> Free g a) -> Free g a) -> Free f a -> Free g a
data Empty a
runFree :: Free Empty a -> a

satisfying various equations about how those functions interact, the most notable one being

interpret h (send f >>= k) = h f (interpret h . k)

There are many known ways to do this: a common starting point is the Free data type. Another way is using a higher-order effect from any one of the fancy effect libraries that support those (I think this should be possible in bluefin as well):

data SendBind f m a where
  SendBind :: f x -> (x -> m a) -> SendBind f m a

-- Simplified variant of https://hackage.haskell.org/package/effectful-core-2.5.0.0/docs/Effectful-Dispatch-Dynamic.html#v:interpret
interpret :: (forall x. h (Eff g) x -> Eff g x) -> Eff h a -> Eff g a
interpret :: (forall x. SendBind f (Eff g) x -> Eff g x) -> Eff (SendBind f) a -> Eff g a -- specialization

However, implementing algebraic effects this way doesn’t buy you much compared to the naive Free data type, because continuations are represented as closures (x -> m a) which cannot be eliminated unless you know the handler has a nice shape. Another way to look at this is that although you may define Free f = Eff (SendBind f) to answer the question above, you have to use a different Monad instance than Eff's to satisfy the required equations; so you don’t benefit from the fact that Eff is just IO under the hood, which is core to the performance of Eff.

The updated problem statement is to implement Free where (>>=) behaves the same as in IO or ReaderT _ IO.

The primitives prompt#/control0# give such a solution. The cost in exchange is that send :: f a -> Free f a, which is really control0#, traverses the stack to copy it and find a matching prompt#. That’s not great.

OCaml palliates this issue using “segmented stacks”: the stack is represented as a linked list so instead of copying “stack segments” we manipulate pointers; and handlers are always at the end of a segmented stack, so you can find the closest handler in constant time without searching frame by frame.

My crazy idea is that GHC already has something similar to segmented stacks: threads. The irony is that OCaml uses segmented stacks to achieve concurrency, and here I’m doing the opposite. Each thread has its own stack and they can pass control to each other by waiting on MVars. With that, it should be possible to implement algebraic effects in IO with reasonable asymptotics.
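
A toy version of that control-passing (not the gist’s full construction): the forked thread suspends until it is handed a value, then passes control and a result back.

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

main :: IO ()
main = do
  request  <- newEmptyMVar
  response <- newEmptyMVar
  _ <- forkIO $ do
    x <- takeMVar request            -- block: wait to be "resumed" with an argument
    putMVar response (x + 1 :: Int)  -- hand control (and a result) back
  putMVar request 41
  print =<< takeMVar response        -- prints 42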

In the code linked below, client code ((>>=) and send) simply consists of MVar operations. On the other hand, the handler interpret is more complicated, but optimizing the client does not rely on knowing the handler and vice versa.

Gist: Algebraic effects implemented using threads · GitHub


Oh I see. Are you sure you can implement multi-shot continuations using threads?

No, that’s one of the many catches :)

Using threads sounds like a neat idea, because it means we do not need to copy the stack, in contrast to control0#. Still, forking and creating MVars is not free… I guess we would need to look at benchmarks.
For example, when using the forking strategy to implement effects in concurrency, I imagine that the number of threads spawned will at least double because every send will spawn a new thread, even for effects that are not ForkIO.

Let me try to summarise the different implementation strategies based on effect class, so that we know what cases we need to optimise for:

  1. Users should be able to declare that an effect (rather than just one of its handlers) is supposed to be tail-resumptive (i.e., fun in Koka), then we do not need to bother with control0# in this case and get great perf. I would think that the vast majority of effect use cases is tail-resumptive. Ömer called providing handlers for extensible non-control effects “dependency injection” and I’m inclined to agree.
  2. If users request a control effect, they opt into some overhead, be it through extraneous forkIO and MVars or through use of control0#.
    Whether forking or control0# is faster for single-prompt delconts is an open question.
  3. People who want multi-prompt delconts will (probably?) need to keep using control0#.

Coming back to the topic of higher-order effects, I don’t find local to be a very motivating example. An “operation that takes computations as arguments” seems very fancy at first, but there’s not a lot that can be done with an argument which is a computation: basically you can run the computation, in which case a handler for a higher-order effect boils down to some extra code that you run before and after the given computation. We may also come up with handlers that run the computation more than once, or under some additional handlers, and I conjecture that there are not that many “shapes” for handlers of higher-order effects, and that once we have enumerated them all, we will find that they can be desugared in terms of first-order effects.

In the simplest case, where all existing higher-order handlers are assumed to be “brackets” around the argument computation, they can be desugared to two first-order effects to call before and after the computation. That is enough to express the examples mentioned above: the standard handler for local (by modifying the state and then restoring it) and the logOnSpan example that inserts logging before and after the argument action.
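
In effectful-style notation the desugaring could look like this (illustrative names, no real library API, and ignoring exception safety, which a real bracket would also handle):

data SpanBegin :: Effect where
  BeginSpan :: Text -> SpanBegin m ()

data SpanEnd :: Effect where
  EndSpan :: Text -> SpanEnd m ()

-- the higher-order inSpan becomes ordinary code around the computation
inSpan :: (SpanBegin :> es, SpanEnd :> es) => Text -> Eff es a -> Eff es a
inSpan name action = do
  send (BeginSpan name)
  a <- action
  send (EndSpan name)
  pure a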


@tomjaguarpaw By including local in bluefin’s Reader effect you gain expressiveness but lose “reasonability”: with only ask, we immediately knew that all uses of ask bound to the same handler will return the same value; with local we now need to keep track of local on the call stack to determine the value of ask. The point is that "ask only" and “ask+local” are two meaningfully different interfaces. Without local, we had one neat equation:

runReader v (\c -> f (ask c)) = f (pure v)

I’m not sure how strongly I am going to push for an "ask only" interface to be part of bluefin (maybe passing values directly works just as well?), but for the sake of argument, consider this similar situation: why even have “ask+local” (Reader) when “get+put” (State) is more expressive anyway?


One last pet peeve I’d like to talk about is how overloaded and confusing the adjectives “dynamic” and “static” are. There are at least three possible meanings in this discussion already.

  1. First, there is the meaning of Effectful.Dispatch.Dynamic/...Static, which is a distinction which could also be made in bluefin as well. To set the scene, effectful and bluefin are two realizations of the “handle pattern”: “handles” are resources created by “handlers” (interpret, runState, etc.), and there are operations that use those “handles” to do stuff.

    But what is a handle actually? You could say that a handle is data: it could be a value or reference that you get/set, a file descriptor which lets you do IO, a flag to enable a feature, etc. We might call this the “static handle” model, and that is what Dispatch.Static deals with. Or you could say that a handle is executable code: the send operation executes that code. We might call this the “dynamic handle” model: Dispatch.Dynamic. Instead of “dynamic” vs “static”, one could call this distinction “handles as code” vs “handles as data”.

    These are two complementary models: dynamic handles generalize static handles, as closures over the relevant data; and static handles generalize dynamic handles since code is data. I find this distinction to be the least interesting because it is “just” a special case of an ubiquitous code-data duality.

  2. The second “dynamic/static” distinction is the one that separates effectful and bluefin. It is a question of scope, by analogy with variable binding: how to determine what handler handles what operation? For a precise example, consider an expression h (f g1 g2) with a handler h :: Eff (e : es) x -> Eff es x, two operations g1, g2 :: Eff (e : es) x of effect e, and some function f :: Eff (e : es) x -> Eff (e : es) x -> Eff (e : es) x. Does h handle the operations g1 and g2?

    With dynamically scoped handlers (effectful), the answer depends on f: it could be that f handles the operations g1 or g2 or both. With statically scoped handlers (bluefin), the expression actually looks more like h (\c -> f (g1 c) (g2 c)), and the use of the variable c guarantees that g1 and g2 are handled by h.

    As a digression, note that dynamically scoped handlers can emulate static scoping using polymorphism: if f has type forall e. Eff (e : es) x -> Eff (e : es) x -> Eff (e : es) x then e is abstract to f which prevents f from handling it. However the details can be subtle: in some effect systems (such as one where all effects are defined in terms of what effectful calls “(dynamic or static) dispatch” (see above)), f could handle an abstract operation, send it upwards (because that’s the only thing it could do with e), and insert some actions before and after the sending. Thus, f may detect that some operations were called, which may or may not be a desirable side channel.

  3. The third meaning is when people use “dynamic” to refer to interpose :: e :> es => Handler e es -> Eff es x -> Eff es x, which “dynamically” installs a new handler for e, hiding an older handler. This may seem closely related to the previous point: in the example expression h (\c -> f (g1 c) (g2 c)), surely f can’t interpose a handler to hide h, because it is statically bound to its operations via c. However you can choose to explicitly allow interposition by rewriting that example to thread the handle c through f: h (\c -> f c (\c' -> g1 c') (\c' -> g2 c')).

    Another way to approach the same idea: if we name effectful’s monad Eff and we name bluefin’s monad B, then we can morally view Eff in terms of B as Eff es x = Hdls es -> B es x, where Hdls is the type of handles for the effect row es. The type of interpose unfolds to e :> es => Handler e es -> (Hdls es -> B es x) -> (Hdls es -> B es x), which makes explicit how handles are threaded through it.

    Note that this is not the only way to “dynamically change the handler” in the statically scoped world. Another technique is to make handles Hdls mutable, which enables a variant of interpose that works even when the handle is passed directly from a handler to its operations without a middle-man. Going back to the running example from earlier, you could write h (\c -> f c (g1 c) (g2 c)) to pass a mutable handle c to f and let it “act at a distance” on g1 and g2.


Right, but I’m beginning to think the ask+local Reader interface is the Reader interface. I previously (naively) assumed that the notion of Reader was just an abstraction of (->), i.e. threading an immutable value around. After this discussion I begin to believe that local modifications of an otherwise immutable value are an essential component of that API.

It does satisfy a nice property though: these two operations are always equivalent:

do
  r1 <- ask re
  r2 <- ask re
  pure (r1, r2)
do
  r <- ask re
  pure (r, r)

(and maybe an even more powerful equivalence can be stated).


The short answer is, yes, Bluefin can do that! See example below.

This is very interesting to me. First I thought that reader state should be immutable. Then I realized that the point of local is to mutate the reader state in a well-scoped way. Now I realize that the behavior of local is not fixed either, and interpose can be used to modify it in a well-scoped way. My next thought is: why stop at local? Why not allow the interpose of local itself to be modified in a well-scoped way? And then … where do we stop? Can’t we do that indefinitely, at each step allowing ourselves to modify the thing that modifies the level below?

Unfortunately it was too vague for me to be able to replicate it in Bluefin. I would be grateful if you could write a small working sample in effectful that demonstrates the nature of interpose modifying local in a Reader effect, so that I can confirm that what I have implemented in Bluefin is a match for it.


Requires unpublished Bluefin: bluefin/bluefin-internal/src/Bluefin/Internal.hs at 5bf5efad6620f51ae98d417f6a8f9c925abfba5b · tomjaguarpaw/bluefin · GitHub

-- ghci> example
-- inside first local
-- local will add trace local1
-- Entering modified local: local1
-- inside second local
-- local will add trace local2
-- Entering modified local: local2
-- Entering modified local: local1
-- inside third local
example :: IO ()
example = runEff $ \io ->
  forEach
    ( \y -> do
        runDynamicReader () $ \re -> do
          dynamicLocal id re $ do
            yield y "inside first local"
            withTracingLocal y re "local1" $ do
              yield y "local will add trace local1"
              dynamicLocal id re $ do
                yield y "inside second local"
                withTracingLocal y re "local2" $ do
                  yield y "local will add trace local2"
                  dynamicLocal id re $ do
                    yield y "inside third local"
    )
    (effIO io . putStrLn)

I wondered this, and the answer is because it’s less expressive. It’s not such a big deal when the reader state is non-effectful, but when it contains effects it becomes essential! For example, in a Bluefin branch I have a handle called H, which is essentially a “reader of a handle”. It supports the following operations. In particular, localH allows us to set the reader state locally to a value that is only permissible within the block (contains effects es), not outside (only effects e)!

localH :: (e :> es, Handle h) => H h e -> h es -> Eff es r -> Eff es r

askH :: (e :> es, Handle h) => H h e -> Eff es (h es)

runH ::
  (e1 :> es, Handle h) =>
  h e1 ->
  (forall e. H h e -> Eff (e :& es) r) ->
  Eff es r

That’s a sharp insight. I believe this idea—that higher-order effects can eventually be desugared into first-order effects—is precisely the concept presented in the Hefty Algebras paper. In the paper, they refer to the desugaring operation that lowers higher-order effects into first-order ones as elaboration, which I feel perfectly captures the nuance. When I think about higher-order effects, I naturally have this mental model as well.

I agree. I feel that we should assign specific terms to each of these three (or perhaps even more?) dynamic/static distinctions.

(I reposted this because I mistakenly specified the reply destination. Sorry.)

Note that the elaboration in the Hefty Algebras paper refers to rewriting the higher-order operations to uses of handlers (usually), not to inserting first-order operations before and after the higher-order block. So, I think you and @Lysxia are not exactly on the same page.
