I’d like to challenge this assumption, firstly because if there are good reasons then I’d value being enlightened about them, and secondly because if there aren’t good reasons then it seems to be implying “assume we don’t want to MILU”, which defeats the point of the discussion.
Does that resolve the remaining issues related to that sentence?
This particular issue is quite subtle. To reiterate and clarify further, my main goal in promoting MILU is to establish a shared understanding of a characterization of programs that largely eliminates the performance and memory usage downsides of laziness whilst largely preserving the compositionality benefits. In particular, I want to dispel the reputation Haskell has of being hard to reason about in terms of performance and memory usage because it is a lazy language. Here are some examples from Algolia’s Hacker News search:
I often hear about performance/memory usage pitfalls with Haskell laziness
I believe this reputation significantly reduces the chance of curious people trying out Haskell, and improving this reputation would have massive benefits for the community as a whole.
Now, regarding the part of my comment that you quoted above: reputedly, the performance and memory usage of Haskell code is hard to reason about because of laziness per se. I am trying to challenge this by responding “no, it’s because of inappropriate laziness that you didn’t want and weren’t benefiting from anyway”.
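To make “inappropriate laziness” concrete, here is the classic example: a lazy left fold accumulates a chain of unevaluated `(+)` thunks that nobody wanted, while the strict variant does the same job in constant space. (This is my own illustrative sketch, not something from the discussion above.)

```haskell
-- Inappropriate laziness in miniature: foldl builds a thunk chain
-- ((0+1)+2)+... in the accumulator; foldl' forces it at every step.
import Data.List (foldl')

lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0  -- accumulator is a growing chain of thunks
strictSum = foldl' (+) 0  -- accumulator is evaluated at each step

main :: IO ()
main = print (strictSum [1 .. 1000000])
```

Nothing about the *meaning* of the program needed laziness in the accumulator; that is exactly the kind of laziness I claim users “didn’t want and weren’t benefiting from anyway”.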
So I don’t think I’m trying to split a hair, but I am trying to thread a needle. On one side are non-Haskellers who think that being lazy by default makes Haskell essentially unusable. On the other side are (a subset of) Haskellers who think that challenging anything to do with laziness is blasphemy (I’m not referring to anyone engaged in recent discussions here). I’m trying to take a middle path by suggesting that the whole discussion can be finessed by appropriate design of data types.
Yes, fair enough.
On this specific point, I developed strict-wrapper (“lightweight strict types”) to make exactly this kind of thing easier. By its nature there shouldn’t be any missing API functions, because the whole point is lightweight conversion to and from the lazy version. (I know for a fact there are missing instances, though. I should add them.)
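The idea behind the package can be sketched in a few lines without depending on it. The names below (`StrictPair`, `strictP`, `unstrictP`) are my own illustrative stand-ins, not strict-wrapper’s actual API: a strict counterpart of a lazy type, plus cheap conversions in both directions.

```haskell
-- Hand-rolled sketch of the strict-wrapper idea: a strict mirror of a
-- lazy type with lightweight conversions both ways.
data StrictPair a b = StrictPair !a !b
  deriving (Eq, Show)

strictP :: (a, b) -> StrictPair a b
strictP (a, b) = StrictPair a b  -- forces both components on construction

unstrictP :: StrictPair a b -> (a, b)
unstrictP (StrictPair a b) = (a, b)

main :: IO ()
main = print (unstrictP (strictP (1 :: Int, 2 :: Int)))
```

Because the conversions are total and mechanical, there is (instances aside) nothing to “miss”: any lazy-type function is reachable via `unstrictP`, and you convert back when you want strictness guarantees again.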
There’s a lot left unspecified there. Who are these beginners, how are they learning, what do they have to delay learning if they learn about deepseq instead (if anything)? It’s very hard to make a precise claim without knowing much more context. Instead, here’s a precise situation that I hope is easier to debate:
Yes, Data.Vector can store thunks. Use Data.Strict.Vector instead.
and the discussion about space leaks and laziness had ended there.
I claim yes, because it’s a simple solution that extends to the vast majority of situations that new users will encounter and it avoids overwhelming them with “an eye-watering number of methods” that will just confuse and disappoint them (and, I believe, lead to attrition from Haskell).
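The `Data.Vector` point can be demonstrated without any extra packages. A lazy boxed cell (standing in for one element of a boxed `Data.Vector`) happily stores a thunk that would crash if forced; a strict field (standing in for what a strict vector such as `Data.Strict.Vector` does per element) forces it at construction. The `LazyBox`/`StrictBox` types are my own illustration, not library types.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- One lazy cell vs one strict cell, as proxies for the two vector types.
data LazyBox a   = LazyBox a
data StrictBox a = StrictBox !a

main :: IO ()
main = do
  -- The lazy box stores the bottom value without evaluating it.
  let boxed = LazyBox (undefined :: Int)
  case boxed of LazyBox _ -> putStrLn "lazy box stored the thunk untouched"
  -- The strict field forces its contents as soon as the constructor is
  -- evaluated, so the bottom value is detected immediately.
  r <- try (evaluate (StrictBox (undefined :: Int)))
         :: IO (Either SomeException (StrictBox Int))
  putStrLn (either (const "strict box forced the thunk") (const "no error") r)
```

This is the whole lesson a new user needs at that point: the strict type cannot store the thunk, so the space leak cannot arise.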
Regarding the PureScript situation, using force seems better than doing nothing, and also better than an invasive investigation when there aren’t the resources to perform one. This suggests that it’s good for Haskellers to learn about and use force sometimes. Where exactly we draw the line is open for debate; I made the claim above to help that debate.
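For readers who haven’t met it, `force` comes from `Control.DeepSeq` (the deepseq package that ships with GHC): it evaluates a structure all the way down, so no thunks can hide behind constructors. A minimal usage sketch:

```haskell
import Control.DeepSeq (force)
import Control.Exception (evaluate)

main :: IO ()
main = do
  -- Each pair holds an unevaluated multiplication; force walks the
  -- whole list and evaluates every component before we continue.
  let pairs = [(n, n * n) | n <- [1 .. 5 :: Int]]
  forced <- evaluate (force pairs)
  print forced
```

It is the blunt instrument end of the spectrum: effective, but it pays a full traversal every time, which is exactly why designing the data type to be strict in the first place (the MILU approach) is usually preferable when you have the resources to do it.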