I see Wadler’s Law is popping off in this thread
Yes, Wadler’s law:
http://www.informatik.uni-kiel.de/~curry/listarchive/0017.html
…and yet people still keep asking for “niche-syntax” extensions!
But this thread is now well and truly off topic: most of the posts from here onwards:
https://discourse.haskell.org/t/maintaining-haskell-programs/7166/42
should go into their own thread. Alternatively, like another recent thread:
https://discourse.haskell.org/t/hasura-migrating-to-rust/6620/114
this one probably should also be locked for the same reasons…
I think the various “off topic” tangents [1] in this thread have been the best part. Linear forums such as Discord are an old medium and letting them go off topic (on a leash) is always a fun way to socialize online. Obviously we need rules. But still.
[1] if you can call them that. Discussing BlockArguments and software management and code formatters sure could be argued to be on topic in a thread about Haskell software maintenance.
I’d say it’s at least loosely related … I’ve learned a lot from following through this thread!
Truth be told, projects I work on have known bugs only rarely and briefly. The hard work is adding features. Also, in my whole life I have had exactly one very short argument about names of local bindings.
But the issue is not with communication between people so much as it is about having to come up with names. Coming up with names, local or not, is creative work. I want in my life to either do excellent creative work or none at all. BlockArguments let me have this because they free me from the mini game of matching parentheses and from some of the work of coming up with names. It is good for me.
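For a concrete taste, here is a small sketch of that style (the file-reading example and its names are invented for illustration):

{-# LANGUAGE BlockArguments #-}

import Control.Exception (bracket)
import System.IO (IOMode (ReadMode), hClose, hGetLine, openFile)

-- Each argument to bracket is an anonymous block: no parentheses to
-- match and no helper names to invent.
firstLine :: FilePath -> IO String
firstLine path = bracket
  do openFile path ReadMode
  do hClose
  do \h -> do
       line <- hGetLine h
       pure ("first line: " ++ line)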
I do find that names like m and f are bad. Cryptic identifiers do confuse me whenever I see them, be it in Haskell or in any other language. There is ambiguity, there is context dependence. For example, m can be a monoid, or a monad, or a natural number. The worst offender here is module names. Why do people keep importing stuff qualified as a single letter? This pains me so much.
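To make the module point concrete, compare (the module chosen here is arbitrary):

-- Compare the two aliasing styles for the same import:
--   import qualified Data.Map.Strict as M    -- which M? a monoid? a monad? some map?
import qualified Data.Map.Strict as Map      -- reads as prose at every use site

ages :: Map.Map String Int
ages = Map.insert "Ada" 36 Map.empty

main :: IO ()
main = print (Map.lookup "Ada" ages)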
On the other hand, if you feel the need to document a big definition, splitting it into smaller definitions with the where clause, as in your example, is the way to go. You seem to strongly disapprove of the way I like to write stuff… but I think both the do and the where have their use — the where when you want to document, the do when you want your sub-expressions to be anonymous. This is the same as with named functions and anonymous functions — both have their use.
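A tiny sketch of that split, with invented names: where for the pieces that deserve names and documentation, do for the argument that needs none.

{-# LANGUAGE BlockArguments #-}

-- where: the pieces get names and a line of documentation each.
area :: Double -> Double -> Double
area width height = rectangle + gable
  where
    rectangle = width * height      -- the main wall
    gable     = width * height / 2  -- the triangular top above it

-- do: the sub-expression stays anonymous.
main :: IO ()
main = print do area 3 4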
That “goto fail” example:
https://www.imperialviolet.org/2014/02/22/applebug.html
…is a poignant illustration of what happens when syntactic sweetening goes too far - I dare say the majority of us have blundered into this ugly “idiom” enough times to wish vehemently that it didn’t exist.
To be honest, I cannot empathize because what I see is the use of imperative programming. Using imperative programming to denote pure computations is wrong — of course they will have issues like this. I do not think this example can be extrapolated to Haskell.
I do not understand the «sugar», «sweetening» line of argument either. Every computational problem can be solved in C, and the solution will be more portable than a Haskell one. Are all other programming languages then a «sweetening» of C? If so, then what you call «sugar» is a good thing. The task of a language designer is to make the language more commensurate with human abilities. Overall it seems to me that an attempt is being made to label a good thing with a bad word. (It is widely known that sugar is harmful.)
Speaking of Wadler’s law — I should be delighted to talk about semantics instead of the lexical syntax of comments but alas. So, yes.
TBH, with the mainstreaming of languages with ML-style type systems (Rust), as well as of functional programming (mainly JavaScript’s functional dialect, with TypeScript and other languages to a lesser degree, although powerfully typed FP is still rare), Haskell’s syntax is a key advantage.
With IO / monadic code, Haskell is comparable to Python, but in pure code, Haskell beats every language outside its own family (Elm / Idris / PureScript) for readable and concise code.
To be fair to @kindaro, not needing to worry about naming local bindings is one of the things I love about Haskell. Between fmap, function composition, lambda case, etc. I always miss this ability when working in other languages.
But personally, if I do encounter this situation, I usually create local bindings, because if the logic is complicated enough to warrant a do block, there’s usually a good name for it. And barring that, I’ve also just used boring parentheses:
foo
  ( do
      a <- m
      bar a
  )
  $ do
      b <- n
      baz b
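(For comparison, with BlockArguments the same shape needs neither the parentheses nor the $. Here foo, m, n, bar and baz are hypothetical stand-ins just so the sketch compiles:)

{-# LANGUAGE BlockArguments #-}

main :: IO ()
main =
  foo
    do a <- m
       bar a
    do b <- n
       baz b
  where
    foo x y = x >> y   -- placeholder combinator
    m = pure "hello"
    n = pure (42 :: Int)
    bar = putStrLn
    baz = print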
It would be awesome to see how a small production codebase, presumably from a team under kindaro’s control, written with assiduous block argument abuse, would evolve.
PureScript has BlockArguments. So if you took the largest PureScript codebase, would it satisfy your request? Or is there something else in PureScript that means it wouldn’t demonstrate the point the same way?
These are not all my ideas. I actually thought they were widely accepted, or at least widely known — except for the one where you write arguments to a function as do blocks; that one I have not seen elsewhere.
It could be that @Tikhon Jelvis invented the monad list notation back in 2022 — I saw it here: https://twitter.com/tikhonjelvis/status/1495808960103948291. Beautiful, is it not?
People have been doing something similar with test suites since forever — while tasty makes you write a list of checks, other frameworks, in Haskell as well as in other languages, ask you to write a giant imperative block like so:
it ("does this", check_this);
it ("does that", check_that);
It is not a far leap to try and build a tasty test suite in the same way, as an imperative block — and then why not any other nested list thing!
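A minimal sketch of that leap, assuming a Writer that collects the checks (the Check, it and suite names here are invented; this is not tasty's API):

{-# LANGUAGE BlockArguments #-}

import Control.Monad.Trans.Writer (Writer, execWriter, tell)

-- A made-up miniature framework: a check is just a name and a Bool.
data Check = Check String Bool

-- it records one check; suite turns the imperative-looking block
-- back into the plain list that a tasty-like runner would expect.
it :: String -> Bool -> Writer [Check] ()
it name ok = tell [Check name ok]

suite :: Writer [Check] () -> [Check]
suite = execWriter

checks :: [Check]
checks = suite do
  it "addition commutes" (1 + 2 == 2 + 1)
  it "reverse is an involution" (reverse (reverse "abc") == "abc")

main :: IO ()
main = mapM_ (\(Check name ok) -> putStrLn (name ++ ": " ++ show ok)) checks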
The idea that identifiers are documentation is seen, for example, here: https://twitter.com/bitfield/status/980022149099421696, about 5 years ago.
- kindaro: […] 2 new names […] creates space for trivial decisions. The need to make trivial decisions makes life harder.
- brandonchinn178: […] not needing to worry about naming local bindings is one of the things I love about Haskell.

Perhaps you’ll both be happier with using:

- FP: Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs (starting from section 11 on page 7 of 29);
- …or its successor FL: The FL Project: The Design of a Functional Language.

…no local bindings anywhere - enjoy.
I shouldn’t have read your comment; now I’m tempted to become a do maximalist myself.
One tiny benefit of using do instead of $ is that, in HLS, the do provides a natural place to show the type of a complex anonymous expression on hover. For example, hovering over the first do in such code shows us the type of the whole do block.
If we had used (typeRep (Proxy @b)) or $ typeRep (Proxy @b), I’m not sure VSCode would have shown us the type of the typeRep (Proxy @b) expression as a whole.
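Here is a sketch of the shape of code being described (the surrounding names are invented; only the do typeRep (Proxy @b) part comes from the example above):

{-# LANGUAGE BlockArguments, ScopedTypeVariables, TypeApplications #-}

import Data.Proxy (Proxy (..))
import Data.Typeable (Typeable, typeRep)

-- The do gives HLS a single token whose hover type is the whole
-- anonymous block, namely TypeRep.
describe :: forall b proxy. Typeable b => proxy b -> String
describe _ = "the type is " ++ show do typeRep (Proxy @b)

main :: IO ()
main = putStrLn (describe (Proxy @Int))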
(I might be wrong though… Does HLS have a “show type of selected expression” feature?)
Am I wrong in thinking that this do trick only works incidentally because functions and lists are monads? What happens if one of the types changes? Do you have to rewrite the whole expression using $?
It rather looks to me like an incidental way of triggering the BlockArguments lexical rules… which is shocking. And I’m not sure it reads well in English if abused…
example = fmap
  do \x -> x^2 + 1
  do [1, 2, 3]
^— Can we squint our eyes and read this in English nicely?
I like BlockArguments for removing brackets, but I didn’t know this one was possible
Edit:
In fairness, we are just used to reading “$” as a delimiter; there is nothing readable about it either. When I first started Haskell, it was shockingly alien too.
You don’t need a Monad instance; you can write things like
baz :: Int
baz = do 5
I guess that if you don’t try to compose or sequence “statements”, the instance is not required at all.
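Right: a one-expression do block desugars to just that expression. For contrast, a quick sketch (names made up) of where the Monad constraint comes back in:

-- A single-expression do block is just that expression; no Monad needed.
five :: Int
five = do 5

-- Sequencing two statements desugars to (>>), which does need a Monad,
-- so this version is rejected at type Int:
-- notFive :: Int
-- notFive = do { 4; 5 }

main :: IO ()
main = print five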
In fairness, we are just used to reading “$” as a delimiter; there is nothing readable about it either. When I first started Haskell, it was shockingly alien too.
Also, the “true” type of $ is kind of hairy:
($) :: forall (r :: GHC.Types.RuntimeRep) a (b :: TYPE r). (a -> b) -> a -> b
Upon reflection, it feels odd to deploy an operator with such a complex type out of mere syntactic convenience.
BlockArguments
Also, does BlockArguments work for lambdas too? In that case the first do should be redundant.
Also, the “true” type of $ is kind of hairy:
($) :: forall (r :: GHC.Types.RuntimeRep) a (b :: TYPE r). (a -> b) -> a -> b
You are not alone:
Upon reflection, it feels odd to deploy an operator with such a complex type out of mere syntactic convenience.
Adding on to that… linear-base has its own “$” and it doesn’t jibe well with base’s “$”… So I might actually also be using do + BlockArguments then. The only thing bothering me is how to read it aloud.
I just tried: no, for obvious reasons in hindsight.
So, no BlockArguments:
example = fmap
  (\x -> x^2 + 1)
  [1, 2, 3]

example' = flip map
  [1,2,3]
  $ \x -> x+1
With BlockArguments:

{-# LANGUAGE BlockArguments #-}

-- GHC says no
-- example = fmap
--   $ \x -> x^2 + 1
--   $ [1, 2, 3]

example' = fmap
  do \x ->
       x^2 + 1
  do [1, 2, 3]

example'' = fmap
  do \x ->
       x^2 + 1
  [1, 2, 3]

-- or other combinations
Actually I find example'' might be quite readable.
Edit: fixed some compilation failures and added one more example from maxigit. The lambda is intentionally multi-line to highlight the fact that it can span lines.
You shouldn’t need the do before the lambda
example = flip map
  [1,2,3]
  \x -> x+1

example2 = map
  do \x -> x+1
  [1,2,3]
work, but
example = map
  \x -> x+1
  [1,2,3]
Doesn’t compile …