I don’t care if a language is hard to understand, as long as it promises to solve some of the problems that easy-to-understand languages give us. I’ve been promised that the impossibility of mutating state in Haskell (and other functional languages) is a game changer, and I do believe it. I’ve had too many state-related bugs in my code, and I totally agree that reasoning about the interaction of objects in OOP languages is near impossible, because objects can change state, and so in order to reason about the code we have to consider every possible permutation of those states.
However, I’ve been finding that reasoning about Haskell monads is also very hard. As you can see in the answers to the question I linked, we need a big diagram to understand three lines of do notation. I always end up opening stackedit.io to desugar the do notation by hand, writing out the >>= applications step by step in order to understand the code.
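To make the kind of hand-desugaring I mean concrete, here is a small sketch in the Maybe monad (the function names are made up for illustration): the do-notation version, and the same computation written out as the >>= applications it mechanically translates to.

```haskell
import Text.Read (readMaybe)

-- Parse two strings as Ints and add them, in do notation.
addInputs :: String -> String -> Maybe Int
addInputs s1 s2 = do
  x <- readMaybe s1
  y <- readMaybe s2
  return (x + y)

-- The same computation desugared by hand: each `v <- e` line
-- becomes `e >>= \v -> ...`, with the rest of the block as the
-- body of the lambda.
addInputs' :: String -> String -> Maybe Int
addInputs' s1 s2 =
  readMaybe s1 >>= \x ->
  readMaybe s2 >>= \y ->
  return (x + y)

main :: IO ()
main = do
  print (addInputs  "1" "2")   -- Just 3
  print (addInputs' "1" "x")   -- Nothing
```

Both versions behave identically; the desugared one just makes every bind and every bound variable explicit.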
The problem is more or less this: in the majority of cases, when we have S a >>= f, we have to unwrap S and apply f to the value inside. However, f is itself another thing more or less of the form S a >>= g, which we also have to unwrap, and so on. The human brain doesn’t work like that: we can’t easily apply one of these steps in our head, stop, keep the partial result on the brain’s stack, and continue applying the rest of the >>= chain until we reach the end, and then pop everything off the brain’s stack and glue it together.
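The nesting I’m describing can be made visible by fully parenthesizing a chain. A minimal sketch, again in the Maybe monad with a made-up helper:

```haskell
-- Halve a number, failing on odd input.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

-- Written flat, the chain reads left to right:
chain :: Int -> Maybe Int
chain n = half n >>= half >>= half

-- Re-associated (which the monad laws permit), the same chain shows
-- the nesting you have to hold in your head: each >>= hands its
-- result to a lambda whose body contains the entire rest of the
-- computation.
chainNested :: Int -> Maybe Int
chainNested n =
  half n >>= (\a ->
    half a >>= (\b ->
      half b))

main :: IO ()
main = do
  print (chain 40)        -- Just 5
  print (chainNested 40)  -- Just 5
  print (chain 6)         -- Nothing (6 -> 3, and 3 is odd)
```

Reading chainNested literally is exactly the unwrap-and-keep-on-the-stack exercise: unwrap half n, remember you’re inside a lambda, unwrap half a, remember again, and so on until the end.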
Therefore, I must be doing something wrong. There must be an easy way to understand ‘>>= composition’ in your head. I know that do notation is very simple, but I can only think of it as a convenient way to write >>= compositions. When I see do notation, I simply translate it into a bunch of >>=s; I don’t see it as a separate way of understanding code in its own right. If there is one, I’d like someone to tell me.
So the question is: how do you read do notation?