[ICFP'22] Staged Compilation with Two-Level Type Theory - YouTube
Staging like Template Haskell, but with a better theory, and where quotations and splicing can mostly be inferred automatically. Unfortunately it requires dependent types, so it is unlikely to make its way into GHC in the short term.
…can now rejoice - presumably the progress towards “pure imperativity” will also include the addition of while (...) statements at the end of every do-block.
Koka has already had that functionality for a few years […]
… while (...) statements at the end of every do-block? :-D
Blunders and chuckles aside, the presentation on implementing strong call-by-need is somewhat reminiscent of a recent (2016) exploration of head reduction: the reduction of expressions to head normal form, rather than just weak-head normal form - something to look at (more closely!) later, when the slides from that presentation are available…
Ah, Paul Downen is a coauthor of that exploration of head reduction, and he has also contributed to the work on introduction and elimination, left and right this year. It seems to use some of the same ideas, so that might also be interesting to look at more closely. He is also a coauthor of compiling with classical connectives, which seems to share similar ideas.
This line of research has given us GHC join points, but it seems like it can give much more if a language is based on it from the ground up like Duo.
Speaking from experience, the Lean extensions to do-notation are actually quite nice to use in practice, and I don’t think that they deserve being made fun of.
If we accept that do notation is nicer in many contexts than manually writing out >>=, then why shouldn’t we also think it’s plausible that let mut can be nicer than a local invocation of runStateT, or that the early return statement can be a nice syntactic sugar for a local invocation of runExceptT?
I’d love to see some of these things in GHC myself.
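To make the runStateT half of that correspondence concrete, here is a rough sketch of the kind of code that a let mut in Haskell would stand in for today; sumAndCount and the local execStateT are my own illustration, not anything from the Lean papers:

import Control.Monad.Trans.State ( execStateT, modify' )
import Data.Foldable ( for_ )

-- roughly: "let mut (s, n) := (0, 0); for x in xs do { s := s + x; n := n + 1 }; return (s, n)"
sumAndCount :: Monad m => [Int] -> m (Int, Int)
sumAndCount xs = execStateT (for_ xs step) (0, 0)
  where
    step x = modify' (\(s, n) -> (s + x, n + 1))

The mutation is entirely local: the caller only ever sees a plain m (Int, Int).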
However, do notation is basically just a way to avoid indentation and parentheses, while this let mut is much more complicated. Many beginners already have trouble understanding the rules of do notation. And from personal experience helping a student write Koka code, I can say that the rules for local mutable state are much more difficult to understand. For example, try to understand this issue.
Also, I think we should keep unidiomatic things hard to write. There usually is a good way to write your code using higher-order functions without having to resort to local mutable state. So it would mostly be useful when you are translating imperative code (like the Lean people are doing) or if you are more familiar with imperative code than functional code. I expect that adding special syntax like that will prevent people from truly learning to appreciate higher-order functional programming.
There’s a similar divide in array languages where Accelerate and Futhark use pointfree functional combinators while Dex uses imperative pointful for loops. I think I prefer the pointfree functional combinator style.
I’m just guessing here: there probably are a great many other “syntactic sugar crystals” from other languages that people would love to see in Haskell, too - when does this process of accretion stop?
We already support the syntax for some esoteric language called Do - shall we also add lisp, fortran, algol, ada, smalltalk, prolog, sml, java, and intercal-notation too? At that rate, people will soon be able to write Haskell using C++ syntax.
But if you really want to add more semantically-significant keywords - seq, say, but like its namesake definition in Miranda(R) and Lazy ML: properly sequential in its order of evaluation.
Wasn’t one of the proclaimed benefits of having higher-order functions that we wouldn’t need so much “syntactic sugar”, because patterns of control can be abstracted out as reusable definitions?
But if we have reached the limits of what can be practically refactored, perhaps it’s time for a proper macro system like Clean has, instead of e.g. growing the collection of built-in “synonyms”…
I agree that there are clearly lots of engineering issues to solve. But I don’t agree that the manual encodings of these things are easier to read and maintain - the semantics of actual mutability are difficult, but as a syntax for StateT, I think it’s pretty nice. But now we’re basically down to matters of taste and “don’t knock it 'til you’ve tried it”, which are usually not interesting discussions to have, or matters of engineering, error messages, and approachability, which are interesting but difficult and don’t make sense outside the context of a specific proposal.
I just really don’t like seeing work going on in other language communities, especially work that’s good enough for ICFP, get dismissed out of hand with sarcasm. I think it makes us intellectually worse off as a community to ignore the world around us.
Something I’d add is that the purpose and target audience and intended sort of code to be written in Lean is different than in Haskell. So often syntactic features that may make sense for one audience and set of purposes don’t make sense for another.
If I’m a haskeller working in my language of choice, I may prefer to write and think in an FP style. But if I’m using Lean to encode and verify an algorithm presented imperatively, then using features that encode that algorithm as close to its imperative presentation/specification as possible may well be a much easier way to think.
What you’re saying makes sense, but it doesn’t correspond with the way I’ve used those features in Lean. They really do replace local uses of runStateT and friends with something that’s easier for me to read and reason about. Sometimes local, encapsulated mutation or early loop termination is just an easier way to describe something than re-encoding into another form. It’s far from always the case that I’d reach for those, but sometimes they’re just nice to have.
I’m not arguing for manual encodings like those presented in the paper. Instead, you should write proper functional code. For example, the paper presents a “succinct” locally imperative implementation of findM:
def List.findM? [Monad m] (p : α → m Bool) (xs : List α) : m (Option α) := do
  for x in xs do
    let b ← p x
    if b then
      return some x
  return none
But in Haskell you can write it as an even more succinct composition of more and less common combinators:
import Control.Monad ( guard )
import Control.Monad.Trans.Maybe ( MaybeT(..) )
import Data.Foldable ( asum )

findM :: Monad m => (a -> m Bool) -> [a] -> m (Maybe a)
findM p = runMaybeT . asum . map (MaybeT . guardedM p)
  where
    guardedM :: Functor f => (a -> f Bool) -> a -> f (Maybe a)
    guardedM p x = (x <$) . guard <$> p x
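For what it's worth, a quick GHCi check (mine, not from the post) shows it short-circuits just like the imperative version, stopping at the first hit:

ghci> findM (\x -> print x >> pure (x > 2)) [1, 2, 3, 4 :: Int]
1
2
3
Just 3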
I would actually write the example like this, using Lean’s equivalent of Idris’s !-notation:
def List.findM? [Monad m] (p : α → m Bool) (xs : List α) : m (Option α) := do
  for x in xs do
    if (← p x) then
      return some x
  return none
As a fairly seasoned Haskeller, I find the combinator version fun, but I don’t think it’s actually easier to read, write, or understand than this version here. Nor do I think that it provides any more powerful opportunities for insight or any additional reasoning principles that would make me prefer it. I’m not saying that style of code never has its place, but this particular example is one where I think the extended do-notation is really great.
But now we’re back to individual idiosyncrasies as well as things about which reasonable people can disagree.
But perhaps rather than waiting for someone to champion a GHC proposal, I should simply move on to Koka.
But in Haskell you can write it as an even more succinct composition of more and less common combinators:
I’d totally take the Lean definition of findM. It’s so much more elementary! I can grasp its definition in 5s, whereas that’s the time it would take me to figure out what asum does to a [MaybeT m a], where each element is some guardedM. No thanks, I like early return. I also like the Idris notation.
Perhaps it would just be easier to have miranda or haskell-notation in the likes of Java or C++: mountains of [observable] effect-free code ascending from the vast wastelands of effect-addled imperativity with all the buzz reserved words and associated “semantics” that [almost] everyone else just can’t get enough of…
MaybeT is early return (going back all the way to Moggi’s work on monads and monad transformers).
This is going to sound very ivory tower-y, but I am not convinced that your preferences are based on anything more than just familiarity with imperative code. I would love to be proven wrong about this, but I fear that would require a very long and expensive research project.
And here’s a much more concrete benefit of the functional style: you can change the behavior of the function to return all possible results, instead of just the first one, by changing MaybeT to ListT (or to monadic streams like those from the vector-stream package, although I believe that has no Alternative instance yet). That’s a trivial change, as sketched below.
If you want to do that in imperative style then you either have to basically write a completely different function, or extend the language even further with something like Python’s yield.
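To sketch what I mean, here is roughly that change, using LogicT from the logict package as a stand-in for ListT (the ListT in transformers is deprecated); findAllM and keep are just illustrative names:

import Control.Monad ( guard )
import Control.Monad.Logic ( LogicT, observeAllT )
import Control.Monad.Trans.Class ( MonadTrans(..) )
import Data.Foldable ( asum )

-- Same shape as findM, but collects every element for which p succeeds.
findAllM :: Monad m => (a -> m Bool) -> [a] -> m [a]
findAllM p = observeAllT . asum . map keep
  where
    -- keep x yields x when p x holds, and yields nothing otherwise
    keep x = lift (p x) >>= \b -> x <$ guard b

The effects still run in list order; the only difference is that it keeps going after a hit instead of stopping.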
I’ve been thinking more about it; you can actually already get very close with just a custom transformer (actually it is just ExceptT with different names):
findM :: Monad m => (a -> m Bool) -> [a] -> m (Maybe a)
findM p xs = runEarlyT do
  for_ xs \x -> do
    b <- lift (p x)
    when b (earlyReturn (Just x))
  pure Nothing
So really the only difference with the new Lean syntax is that it requires two extra functions: runEarlyT and lift.
Here is the full implementation:
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE BlockArguments #-}
{-# LANGUAGE DeriveFunctor #-}  -- these two are implied by GHC2021
{-# LANGUAGE InstanceSigs #-}

import Control.Monad ( ap, when )
import Control.Monad.Trans.Class ( MonadTrans(..) )
import Data.Foldable ( for_ )

-- ExceptT in disguise: Left carries the value "returned early".
newtype EarlyT r m a = EarlyT (m (Either r a)) deriving Functor

instance Monad m => Applicative (EarlyT r m) where
  pure x = EarlyT (pure (Right x))
  (<*>) = ap

instance Monad m => Monad (EarlyT r m) where
  EarlyT m >>= k = EarlyT $
    m >>= \case
      Left x  -> pure (Left x)            -- already returned early: skip the rest
      Right x | EarlyT y <- k x -> y

instance MonadTrans (EarlyT r) where
  lift :: Monad m => m a -> EarlyT r m a
  lift x = EarlyT (fmap Right x)

-- Abort the remainder of the computation with the given result.
earlyReturn :: Applicative m => r -> EarlyT r m a
earlyReturn = EarlyT . pure . Left

-- Run the computation, merging early and normal results.
runEarlyT :: Monad m => EarlyT a m a -> m a
runEarlyT (EarlyT m) = m >>= \case
  Left y  -> pure y
  Right y -> pure y

findM :: Monad m => (a -> m Bool) -> [a] -> m (Maybe a)
findM p xs = runEarlyT do
  for_ xs \x -> do
    b <- lift (p x)
    when b (earlyReturn (Just x))
  pure Nothing
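And to underline the "it is just ExceptT with different names" point, here is a minimal sketch of the same function written directly against ExceptT from transformers (findM' is only an illustrative name):

import Control.Monad ( when )
import Control.Monad.Trans.Class ( MonadTrans(..) )
import Control.Monad.Trans.Except ( runExceptT, throwE )
import Data.Foldable ( for_ )

findM' :: Monad m => (a -> m Bool) -> [a] -> m (Maybe a)
findM' p xs = fmap (either id id) . runExceptT $ do
  for_ xs $ \x -> do
    b <- lift (p x)
    when b (throwE (Just x))   -- throwE plays the role of earlyReturn
  pure Nothing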