Solving cyclic boolean implications with pure code and laziness

TL;DR: It seems we can build a pure data structure and API that can handle cyclic equations, but I am not entirely sure, so please try to break it.

Laziness does not do magic

A while ago I had a job where I was writing Ocaml, which isn’t a pure language, so you can pull some neat tricks using mutable references, without making your whole code monadic.

In particular, I was writing a compiler pass that tries to determine which expressions are statically known in some sense. I want to annotate each subexpression and variable with a boolean flag, and (more or less) a value is statically known if all its subexpressions are. This would be a very simple programming task in a pure language, simply using True, False and and :: [Bool] -> Bool, if it were not for recursion: variables can have right-hand sides that mention that variable. Doing this naively with just pure data will obviously not work: code like

ghci> let x = and [y]; y = and [x, False] in x

does not terminate.

Monadic code is easy

In the Ocaml world I solved this by introducing a reference to a “monotonically mutable bool”, which can either be surely False, surely True, or we don’t know yet; and if we don’t know yet, we keep track of which other such “monotonically mutable bool” values will have to be made false if this one turns false.

If I port this code to Haskell, it might look like this (You might have to scroll the listing to see it all):

{-# LANGUAGE LambdaCase #-}
module Experiment where

import Data.IORef
import Control.Concurrent.MVar
import System.IO.Unsafe

-- The imperative part

data PotentiallyBool
    = SurelyTrue -- this is mostly an optimization
    | SurelyFalse
    | MaybeFalse [LBool] -- stores pointers to dependent variables

type LBool = IORef PotentiallyBool

surelyTrue :: IO LBool
surelyTrue = newIORef SurelyTrue

surelyFalse :: IO LBool
surelyFalse = newIORef SurelyFalse

maybeFalse :: IO LBool
maybeFalse = newIORef (MaybeFalse [])

setFalse :: LBool -> IO ()
setFalse l = readIORef l >>= \case
    SurelyTrue -> error "setFalse: Argument already surely True"
    SurelyFalse -> pure ()
    MaybeFalse ls -> do
        -- Do this first, this breaks cycles
        writeIORef l SurelyFalse
        -- Now notify the dependencies
        mapM_ setFalse ls

implies :: LBool -> LBool -> IO ()
implies l1 l2 = readIORef l2 >>= \case
    SurelyTrue -> pure ()
    SurelyFalse -> setFalse l1
    MaybeFalse ls -> writeIORef l2 (MaybeFalse (l1 : ls))

-- this is only safe if it is run after all calls to “implies” have run
canBeTrue :: LBool -> IO Bool
canBeTrue l = readIORef l >>= \case
    SurelyFalse -> pure False
    _ -> pure True

This way we can write monadic code that creates these mutable boolean variables, registers the implications between them, and eventually reads off the result:

ghci> do { x <- maybeFalse; y <- maybeFalse; x `implies` y; y `implies` x; canBeTrue x }
True
ghci> do { x <- maybeFalse; y <- maybeFalse; x `implies` y; y `implies` x; f <- surelyFalse; y `implies` f; canBeTrue x }
False

Note that it is crucial to write SurelyFalse in setFalse before notifying all the dependencies.
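To see the cycle-breaking in action, here is a self-contained sketch (repeating the relevant LBool definitions from above so it compiles on its own) of a two-element implication cycle, where setFalse terminates only because the IORef is updated before the dependencies are notified:

```haskell
{-# LANGUAGE LambdaCase #-}
import Data.IORef

-- The relevant definitions from above, repeated to keep this runnable.
data PotentiallyBool = SurelyTrue | SurelyFalse | MaybeFalse [LBool]
type LBool = IORef PotentiallyBool

maybeFalse :: IO LBool
maybeFalse = newIORef (MaybeFalse [])

setFalse :: LBool -> IO ()
setFalse l = readIORef l >>= \case
    SurelyTrue    -> error "setFalse: Argument already surely True"
    SurelyFalse   -> pure ()          -- second visit: recursion stops here
    MaybeFalse ls -> do
        writeIORef l SurelyFalse      -- write first, breaking the cycle
        mapM_ setFalse ls

implies :: LBool -> LBool -> IO ()
implies l1 l2 = readIORef l2 >>= \case
    SurelyTrue    -> pure ()
    SurelyFalse   -> setFalse l1
    MaybeFalse ls -> writeIORef l2 (MaybeFalse (l1 : ls))

main :: IO ()
main = do
    x <- maybeFalse
    y <- maybeFalse
    x `implies` y        -- if y turns out false, so does x
    y `implies` x        -- if x turns out false, so does y
    setFalse x           -- visits y, which visits x again – and stops
    readIORef y >>= \case
        SurelyFalse -> putStrLn "y is surely False"
        _           -> putStrLn "y could still be True"
```

Had setFalse notified its dependencies before writing SurelyFalse, the two calls would keep visiting each other forever.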

Can we have it pure?

So far so good. But it is really quite annoying having to turn a nice elegant pure traversal of your AST into a monadic beast just for that. This made me wonder: Can we wrap this IO-infested API into an elegant and pure API, providing the combinators mentioned above (true, false and and), while still being safe?

It seems it is possible!

I first need a helper data structure to deal with thunks of code that may be recursive:

type Thunk = IORef (Either (IO [RunningThunk]) RunningThunk)
type RunningThunk = MVar ()

thunk :: IO [RunningThunk] -> IO Thunk
thunk act = newIORef (Left act)

doneThunk :: IO Thunk
doneThunk = do
    done <- newMVar ()
    newIORef (Right done)

kick :: Thunk -> IO [RunningThunk]
kick d = readIORef d >>= \case
    Left act -> do
        inProgress <- newEmptyMVar
        writeIORef d (Right inProgress) -- so that it is kicked at most once
        mvs <- act                      -- Perform the action
        putMVar inProgress ()           -- Mark this as done
        pure mvs
    Right mv -> pure [mv]

force :: Thunk -> IO ()
force t = do
    mvs <- kick t
    mapM_ readMVar mvs

The idea is that

  • thunk act does not evaluate act yet. This is important to get the laziness we need.
  • force (thunk act) then performs the IO action act.
  • act can require other thunks to be performed. In order to break cycles, it is expected to just kick, not force, the other thunk, and return that RunningThunk.
  • This way, the top-level force can first kick off all execution, and then wait for all to happen.

It is crucial that kick marks the thunk as in progress before running the action inside. This way I can have such thunks refer to each other and still be able to resolve them:

ghci> :{
mdo x <- thunk (putStrLn "Forced x" >> kick y)
    y <- thunk (putStrLn "Forced y" >> kick x)
    putStrLn "Starting now…"
    force x
:}
Starting now…
Forced x
Forced y

With this, and of course some unsafePerformIO, we can now attempt to create an API around a pure value that allows us to tie the knot and still get proper answers.

data PBool = PBool LBool Thunk

The idea is that the LBool in the PBool is available right away to be used to build other PBool values, and the Thunk defers all the work still possibly pending before the PBool is “done”. So we get to this implementation for the API.

pTrue :: PBool
pTrue = unsafePerformIO $ do
    l <- surelyTrue
    t <- doneThunk
    pure $ PBool l t

pFalse :: PBool
pFalse = unsafePerformIO $ do
    l <- surelyFalse
    t <- doneThunk
    pure $ PBool l t

pand :: [PBool] -> PBool
pand [] = pTrue
pand xs = unsafePerformIO $ do
    l <- maybeFalse
    d <- thunk $ concat <$> mapM (\(PBool b t) -> l `implies` b >> kick t) xs
    pure $ PBool l d

unPBool :: PBool -> Bool
unPBool (PBool l d) = unsafePerformIO $ do
    force d
    canBeTrue l

and indeed, it seems to work: I can build my booleans using pTrue, pFalse and pand, and even if I have recursion (through pand), it still finds the correct answer (defaulting to True if possible):

ghci> let x = pand [y]; y = pand [x, pTrue] in unPBool x
True
ghci> let x = pand [y]; y = pand [x, pFalse] in unPBool x
False
ghci> let x = pand [y]; y = pand [x] in unPBool x
True

Of course this only works if we have data-dependencies, not if we have control dependencies, or else are too strict:

ghci> let x = pand [y]; y = pand [x, if unPBool x then pTrue else pFalse] in unPBool x

So it seems if we just expose the PBool API (without the PBool constructor), we have built a nice pure API for booleans that still works even if we use it recursively. In particular, were I to re-implement my compiler analysis in Haskell, I could implement this pass with pure non-monadic code.

Questions and observations

  1. Is the API really sound, in the sense that if x = pand ys, then unPBool x == all unPBool ys, no matter how I build these PBools and where I may force unPBool?

  2. Does it break desirable properties of our nice pure language, like Referential Transparency?

  3. Does it break in the presence of threads and concurrency?

    Certainly all the code using readIORef and writeIORef needs to be made atomic. But I assume even then the above code is broken if two threads race to force and kick a network of Thunks. Can this be fixed?

  4. I expect this can be extended to much more general data flow problems than just booleans and implication. Essentially anything of the pattern “define a bunch of objects – note how they are related – solve – read results”, including constraint-based type checking.

    The question will be: Under which circumstances is the result still pure? For example, it is likely necessary that it does not matter in which order the relations between the objects are registered, and in which order they are resolved.

  5. How could I prove that the API is indeed safe and pure? Reminds me a bit of the question of how to prove that runST is pure, answered in the POPL18 paper by Amin Timany et al. But that work doesn’t deal with concurrency yet.

  6. Whose wheel did I just re-invent?
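Question 4’s “define a bunch of objects – note how they are related – solve – read results” pattern can be sketched for sets instead of booleans. This is a hypothetical, deliberately naive pure formulation (names Var, RHS and solve are made up for illustration) that solves cyclic set equations by fixed-point iteration rather than by the propagation scheme used above:

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

type Var = String

-- Hypothetical equation form: each variable is the union of a
-- constant set and the values of some other variables.
data RHS = RHS (S.Set Int) [Var]

-- Naive fixed-point iteration: start every variable at the empty set
-- and re-evaluate all equations until nothing changes. Since union is
-- monotone and only finitely many constants occur, this terminates.
solve :: M.Map Var RHS -> M.Map Var (S.Set Int)
solve eqs = go (M.map (const S.empty) eqs)
  where
    step cur = M.map (\(RHS consts vars) ->
                  S.unions (consts : map (cur M.!) vars)) eqs
    go cur | next == cur = cur
           | otherwise   = go next
      where next = step cur

-- A cyclic system: x = {1} ∪ y, y = {2} ∪ x.
-- Both x and y end up as {1,2}.
main :: IO ()
main = print $ solve $ M.fromList
    [ ("x", RHS (S.fromList [1]) ["y"])
    , ("y", RHS (S.fromList [2]) ["x"])
    ]
```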


I’m getting propagators and monotone frameworks (I couldn’t quickly find a better resource) vibes from this, but I haven’t read your post in full detail. Also lvish might deserve a mention.

1 Like

Right, I think the monadic LBool stuff is a very simple instance of these more general frameworks. If anything, the wrapping into a pure API is noteworthy here (and hopefully generalizes to more general instances).

You might also be able to build this on top of something like Ed’s promises library.

I worked a bit on the subproblem of a thread-safe implementation of the Thunk data type.

The goal is to have this abstract API:

type Thunk
type KickedThunk
thunk :: IO [KickedThunk] -> IO Thunk
doneThunk :: IO Thunk
kick :: Thunk -> IO KickedThunk
force :: Thunk -> IO ()

with the properties that

  • thunk is lazy in its argument, and does not run it directly
  • the first kick triggers execution of the action passed to thunk
  • the action is run at most once
  • force triggers execution of the action passed to thunk, and does not return until
    • the action is executed
    • the action of any thunk kicked by the action is executed
  • Cycles are allowed: The action passed to thunk may kick that thunk
  • It is thread safe: Even if multiple threads force or kick related thunks, all actions are still run at most once, and all calls to force terminate (no deadlock)

Especially the last point is not so trivial, if two threads start processing the same strongly connected component from different ends. One solution would be to use the ThreadId to choose which thread gets to explore the full strongly connected component, while the other then just waits.

So this code might work:

type Thunk = MVar (Either (IO [KickedThunk]) KickedThunk)
data ResolvingState = NotStarted | ProcessedBy ThreadId (MVar ()) | Done
data KickedThunk = KickedThunk (MVar [KickedThunk]) (MVar ResolvingState)

thunk :: IO [KickedThunk] -> IO Thunk
thunk act = newMVar (Left act)

doneThunk :: IO Thunk
doneThunk = do
    mv_ts <- newMVar []
    mv_s <- newMVar Done
    newMVar (Right (KickedThunk mv_ts mv_s))

-- Recursively explores the thunk, and kicks the execution
-- May return before execution is done (if started by another thread)
kick :: Thunk -> IO KickedThunk
kick t = takeMVar t >>= \case
    Left act -> do
        mv_thunks <- newEmptyMVar
        mv_state <- newMVar NotStarted
        let kt = KickedThunk mv_thunks mv_state
        putMVar t (Right kt)

        kts <- act
        putMVar mv_thunks kts
        pure kt

    -- Thread was already kicked, nothing to do
    Right kt -> do
        putMVar t (Right kt)
        pure kt

wait :: KickedThunk -> IO ()
wait (KickedThunk mv_deps mv_s) = do
    my_id <- myThreadId
    s <- takeMVar mv_s
    case s of
        -- Thunk and all dependences are done
        Done -> putMVar mv_s s
        -- Thunk is being processed by a higher priority thread, so simply wait
        ProcessedBy other_id done_mv | other_id < my_id -> do
            putMVar mv_s s
            readMVar done_mv
        -- Thunk is already being processed by this thread, ignore
        ProcessedBy other_id done_mv | other_id == my_id -> do
            putMVar mv_s s
            pure ()
        -- Thunk is not yet processed, or processed by a lower priority thread, so process now
        _ -> do
            done_mv <- newEmptyMVar
            putMVar mv_s (ProcessedBy my_id done_mv)
            ts <- readMVar mv_deps
            mapM_ wait ts
            -- Mark kicked thunk as done
            _ <- swapMVar mv_s Done
            -- Wake up waiting threads
            putMVar done_mv ()

force :: Thunk -> IO ()
force t = do
    rt <- kick t
    wait rt
1 Like

Ocaml, which isn’t a pure language, so you can pull some neat tricks using mutable references, without making your whole code monadic

As a bonus, you’re going to inflict so much pain on any Haskeller who tries to port your code :tada:

Well, not if I can wrap it up in a safe pure API as explored here – now the Ocaml programmer has the pain of using the API correctly, but the Haskell programmer has it easy :slight_smile:

1 Like

I watched Kmett’s talk on propagators, and maybe that gives me a bit more terminology to describe what I think I’m doing here, as there are two parts:

  • LBool provides a monadic API to set up a propagator network, albeit a very simple one: All lattices involved are the two-point lattice consisting of “could be True” < “definitely False”, and the only propagator is implies. It has a decent implementation to solve the propagation (I think it may even be optimal in some sense, but that’s easy with a two-point lattice). It requires the programmer to pay attention to register all relevant edges in the graph using implies before reading off values using canBeTrue.

    I think this can be built on top of Kmett’s Cell, with a newtype around Bool to have a special Propagated instance with merge False _ = Change False False.

  • PBool builds on top of that, but wraps it in a pure and safe API (consisting of pTrue, pFalse, pand and unPBool). This way, the programmer can define the propagator network using simple functional programming, without dealing with monads.
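The Propagated instance mentioned above can be mocked up concretely. This is a self-contained stand-in with hypothetical names (Change, CanBeTrue, mergeCBT); the real Propagated class and Change type in Kmett’s propagators package may differ in detail:

```haskell
-- Hypothetical stand-in for the library's change-tracking type:
-- the Bool records whether the cell's value actually changed.
data Change a = Change Bool a
    deriving (Eq, Show)

-- The two-point lattice: "could be True" above "surely False".
newtype CanBeTrue = CanBeTrue Bool
    deriving (Eq, Show)

-- Merging new information into a cell: once False, always False,
-- and the only real change is a True cell learning False.
mergeCBT :: CanBeTrue -> CanBeTrue -> Change CanBeTrue
mergeCBT (CanBeTrue False) _                = Change False (CanBeTrue False)
mergeCBT (CanBeTrue True) (CanBeTrue False) = Change True  (CanBeTrue False)
mergeCBT (CanBeTrue True) (CanBeTrue True)  = Change False (CanBeTrue True)

main :: IO ()
main = do
    print (mergeCBT (CanBeTrue True) (CanBeTrue False))
    -- Change True (CanBeTrue False)
    print (mergeCBT (CanBeTrue False) (CanBeTrue True))
    -- Change False (CanBeTrue False)
```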

I am more excited about the latter, and I wonder how it compares to other approaches. For example, the propagators library in Prop.hs seems to achieve something similar, using reifyGraph so that one can also use pure programs to define the network, but it seems it all has to happen within forward and backwards, so you can’t easily read off the cells as you go. So I hope I have hit a separate point in the design space… But I need to carve it out more precisely.

I should try to apply my construction to Kmett’s Cell data structure from the propagators library, as an alternative to his Prop.hs, for a better comparison.

1 Like

Indeed, I could take my approach and apply it to Edward’s propagator code:


Started to put it into a stand-alone library:

  • Data.Recursive.Propagator contains a very naive propagator library in IO
  • Data.Recursive wraps it up in pure functions, and documents safety requirements
  • Data.Recursive.Bool and Data.Recursive.Set provide safe APIs for some example types
  • example.hs shows what you can do with them

Sorry if this sounds somewhat negative in tone, but if you’re intending to publish this package to Hackage could you please reconsider the choice of module name “Data.Recursive”? That is a very general name and a very valuable part of the namespace. It might be better to take that name only after a package has become firmly established as widely used in the ecosystem.

(If the package is not to be published on Hackage then by all means use whatever names you like, of course!)

1 Like

I keep renaming the whole thing, and I am happy to take suggestions. What would you call a type that allows recursive definitions?

I certainly want to publish it to Hackage; I think the RSet API is useful in a fair number of applications, and that this is not just a toy experiment. Renaming the module after it is established isn’t really a good idea either, though, so let’s better find a good name right away :slight_smile:

What would you think if all my modules are below Data.Recursive, but do not actually use Data.Recursive itself?

1 Like

Perhaps Data.RecVal or Data.RVal?

If this approach is as generic as it seems, there’s a possibility of it (eventually!) ending up as a basic feature of Haskell, like Maybe or the list - would it be worthwhile to keep it as a candidate package for as long as possible in Hackage, as a precaution?

…maybe not RVal: people trying to search for it would encounter many (mostly) unrelated matches.

Trying to find a suitable name for, well, anything can be an “interesting” process - some time ago, I chose a name for a type class; as I dimly recall, it was the better part of two years before the search engines stopped suggesting alternative words with similar spelling. The inability to remove a misnamed package from Hackage only adds to the challenge: “once wrong, always wrong”.

…before I forget again, the following paper could be describing something similar:

Based on the name, I thought it had a connection to this paper (same author), but it describes a different concept which seems to resemble your approach.

(As for “reinventing wheels”: these papers have only recently been made publicly available by the publishers - in the absence of easily-obtainable prior art, how are you supposed to know if something’s already been invented? ;-)

1 Like

Ah right, R-values already exist.

One more name idea: Data.Solidarity or Data.Solidary, because solidaire means interdependent or mutually dependent in French but even in English the word is associated with mutual connections between people.

Right now I am going with rec-def for the package name, and using various modules under the Data.Recursive.* namespace, without using Data.Recursive itself:

See for the haddocks (repo now at

Hopefully that’s non-squatty enough.

1 Like

Interesting! I have a hunch that you might be able to provide a similar top-level interface, perhaps less efficiently, with neither IO nor unsafePerformIO nor thunks, etc.? In particular, the most naive thing is to let the R monad record tuples of names and trees of possible assignments with results, and to have getR explicitly solve for a minimal set of satisfying assignments. So it’s really a sort of “labeled nondeterminism” monad in which you’re computing the least fixed point?

The immediate issue then presented is that any binding like let x = ... would need to explicitly also give an “x” as a name. So this sort of shows that what’s going on is that IO (or really unsafePerformIO) is being used to provide unique label identity. In turn this reminds me of “observable sharing”, à la the observable-sharing package (“Simple observable sharing”) and the “Observable Sharing” paper.

I seem to recall some more recent work on the topic, but not exactly where…

Note that R is not a monad, and that’s the point! It’s just a wrapper around the values, and you can pass them around like normal ones, and use them even recursively. And I strongly doubt you can implement a similar interface without unsafe code underneath.

The observable sharing work is highly related, but uses (if I remember correctly) even less savory tricks underneath, and is less composable, which is why I like the present implementation.

The library is now maybe ready for a first public release. There are tests, haddock reports 100% documentation coverage, and the overall structure is maybe fine. See for the full haddocks and in particular for an introduction.

Would you use this? If not, what would stop you from using it?

1 Like

I have now published rec-def on hackage, and wrote a blog post about how to use it:

More on the implementation follows later.