I haven’t been able to return to this thread because it prompted me to read more material before I could confidently reply with something useful.
I still haven’t managed to read it all. Instead of letting this thread go cold, I’ll do my best to summarize my understanding of what has been mentioned so far.
Monadic Region
The technique statically guarantees that neither a file handle nor any computation involving the handle can leak outside of the region that created it. Therefore, the handle can be safely closed (and its resources disposed of) when the region ends.
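As a reference point, here is a minimal sketch of the idea in the style of the ST monad: a rank-2 skolem parameter keeps handles from escaping, so the region can close everything it opened on exit. The names Region, RHandle, runRegion, openR, and readLineR are made up for this illustration; real region libraries (e.g. Kiselyov and Shan’s lightweight monadic regions) are considerably more refined.

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE RankNTypes #-}

import Control.Exception (finally)
import Control.Monad.Trans.Reader (ReaderT (..))
import Data.IORef (IORef, modifyIORef, newIORef, readIORef)
import System.IO (Handle, IOMode (ReadMode), hClose, hGetLine, openFile)

-- A region computation tagged with a phantom skolem variable 's' (as in 'ST s').
-- It also carries the list of handles opened in the region, so they can all be
-- closed when the region ends.
newtype Region s a = Region (ReaderT (IORef [Handle]) IO a)
  deriving (Functor, Applicative, Monad)

-- A file handle tagged with the region that opened it.
newtype RHandle s = RHandle Handle

-- The rank-2 quantifier is the skolem trap: 's' cannot occur in 'a', so neither
-- an 'RHandle s' nor any computation mentioning 's' can be returned out of the
-- region, which is what makes closing every opened handle here safe.
runRegion :: (forall s. Region s a) -> IO a
runRegion (Region m) = do
  opened <- newIORef []
  runReaderT m opened `finally` (readIORef opened >>= mapM_ hClose)

-- Open a file inside the region and remember it for cleanup.
openR :: FilePath -> Region s (RHandle s)
openR path = Region $ ReaderT $ \opened -> do
  h <- openFile path ReadMode
  modifyIORef opened (h :)
  pure (RHandle h)

-- Use a handle; its 's' must match the 's' of the current region.
readLineR :: RHandle s -> Region s String
readLineR (RHandle h) = Region (ReaderT (\_ -> hGetLine h))

-- Example: runRegion (openR "input.txt" >>= readLineR) reads a line and closes
-- the handle on the way out, whereas runRegion (openR "input.txt") would not
-- typecheck, because 's' would escape its scope.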
Bluefin
Per @tomjaguarpaw, all effect systems implement “monadic region.” Bluefin is exploring linear types to prevent resource leaks. An example was attached.
scoped-codensity
This implements a Monad just like Codensity from the kan-extensions package, but it uses a skolem trap, just like the ST s monad, to track resources allocated in the monad.
(See: mangoiv's webpage - codensity)
As far as I can tell, the motivation is to let code lean heavily on the “withResource” pattern while taking advantage of Haskell’s monadic do notation, so that it is less tedious to write.
I have been looking for a motivating code example; the best I can find is its test case: scoped-codensity/test/Main.hs at main (mangoiv/scoped-codensity on Forgejo).
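In the meantime, here is a minimal sketch of the underlying Codensity trick, using only Control.Monad.Codensity from kan-extensions. The scoped-codensity package layers the ST-style skolem parameter on top of this, which the sketch omits; withFileC, copyFirstLine, and copyFirstLineC are names I made up for illustration.

import Control.Monad.Codensity (Codensity (..), lowerCodensity)
import Control.Monad.Trans.Class (lift)
import System.IO (Handle, IOMode (ReadMode, WriteMode), hGetLine, hPutStrLn, withFile)

-- Without Codensity, every acquisition adds a nesting level:
copyFirstLine :: FilePath -> FilePath -> IO ()
copyFirstLine from to =
  withFile from ReadMode $ \hIn ->
    withFile to WriteMode $ \hOut -> do
      line <- hGetLine hIn
      hPutStrLn hOut line

-- Wrapping a with-style allocator in Codensity turns its callback into the
-- rest of the do block, so the nesting disappears:
withFileC :: FilePath -> IOMode -> Codensity IO Handle
withFileC path mode = Codensity (withFile path mode)

-- 'lowerCodensity' runs the chain; each file is closed when the whole block
-- finishes, in reverse order of acquisition.
copyFirstLineC :: FilePath -> FilePath -> IO ()
copyFirstLineC from to = lowerCodensity $ do
  hIn  <- withFileC from ReadMode
  hOut <- withFileC to WriteMode
  line <- lift (hGetLine hIn)
  lift (hPutStrLn hOut line)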
Lastly, here is a bit about my motivation for this topic.
What I have achieved in my project is a “graded/indexed” monad; with the help of QualifiedDo, code using this monad looks just like regular code in do notation.
However, under the hood, all data can be bound to a version, where the monad law guarantees that the version number can only stay the same or grow. This version number aims to track the changes in a “world” whose states evolve linearly. Anything that triggers the world to change state should be a monadic operation that advances the version number for future operations.
This makes using old data against a newer state of the world a type error. I dubbed this a “Linearly Versioned Monad” (LVM), and here is the code for its bind operation:
-- ...
-- _Additional Law of Linearly Versioned Monad_
--
-- Additionally, each 'LVM' carries a proof of monotonic growth of data versions, denoted as such @m [va \<= vb]@. Then
-- we have:
--
-- 1) Law of linearly versioned monad: @ ma [va \<= vb] \>>= mb [vb \<= vc] ≡ mc [va \<= vc] @
(>>=) :: forall ctx va vb vc a b. ()
=> LVM ctx va vb a ⊸ (a ⊸ LVM ctx vb vc b) ⊸ LVM ctx va vc b
ma >>= f = MkLVM \ctx -> let !(aleb, ctx', a) = unLVM ma ctx
                             !(blec, ctx'', a') = unLVM (f a) ctx'
                         in (Dict \\ leTrans @va @vb @vc \\ aleb \\ blec, ctx'', a')
Note that the bind operation requires the second monadic operation to provide a proof of vb <= vc, so that it can maintain va <= vc for the monadic output.
This is an example operation that grows the version (note the v + 1 in the type signature); it updates a storage location in this “world”:
sput :: forall v r a. (YulO2 r a, ABIWordValue a)
     => P'V v r ADDR ⊸ P'V v r a ⊸ YulMonad v (v + 1) r (P'V (v + 1) r a)
At the risk of leaving too many details unexplained, the full code can be found here (LVM), here (LVM’s Combinators), and here (some real monadic operations).
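To make the “looks like regular do notation” claim concrete, here is a small self-contained sketch that is not my project’s actual code: a toy graded monad over a single Int “world” with type-level version indices, driven by QualifiedDo. The names VM, Tagged, readW, and writeW are invented for the example; the real LVM additionally threads its context linearly and carries the <= proofs shown above.

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE QualifiedDo #-}
{-# LANGUAGE TypeOperators #-}

module Main where

import GHC.TypeNats (Nat, type (+))
import Prelude hiding ((>>), (>>=))

-- A value tagged with the version of the world it was produced at
-- (a stand-in for the P'V of the real code).
newtype Tagged (v :: Nat) a = MkTagged a

-- A toy versioned monad over a single Int "world": the indices say at which
-- version a computation starts and ends.
newtype VM (va :: Nat) (vb :: Nat) a = MkVM (Int -> (Int, a))

-- The graded bind composes the version indices: va -> vb -> vc.
(>>=) :: VM va vb a -> (a -> VM vb vc b) -> VM va vc b
MkVM m >>= f = MkVM $ \w ->
  let (w', a) = m w
      MkVM m' = f a
  in m' w'

(>>) :: VM va vb a -> VM vb vc b -> VM va vc b
m >> m' = m >>= const m'

-- Reading leaves the version alone; writing bumps it by one.
readW :: VM v v (Tagged v Int)
readW = MkVM (\w -> (w, MkTagged w))

writeW :: Tagged v Int -> VM v (v + 1) ()
writeW (MkTagged n) = MkVM (\_ -> (n + 1, ()))

runVM :: VM 0 vb a -> Int -> (Int, a)
runVM (MkVM m) = m

-- With QualifiedDo this reads like ordinary do notation, yet the type checker
-- rejects any use of a Tagged value whose version is older than the world's.
example :: VM 0 2 ()
example = Main.do
  x <- readW     -- x :: Tagged 0 Int
  writeW x       -- world moves to version 1
  y <- readW     -- y :: Tagged 1 Int
  writeW y       -- world moves to version 2
  -- writeW x    -- rejected: x is stuck at version 0

main :: IO ()
main = print (fst (runVM example 0))

Uncommenting the last statement fails to typecheck because x’s version index no longer matches the world’s current index, which is exactly the kind of stale-data use the LVM is designed to rule out.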
To wrap up, my motivation is to find prior art for the approach I have taken so far, and monadic regions were the first clue I found.