Forking off a thread to continue the comment at Deepseq versus "make invalid laziness unrepresentable" - #3 by rhendric
Before responding to @rhendric’s comment I think it will help if I clarify my point of view, to avoid talking at cross purposes. The hill that I am willing to die on is that the correct way to fix the program in Optimiser performance problems is to make invalid laziness unrepresentable (MILU), and that `force`/`deepseq` and bang patterns are incorrect. I also extend this claim to a wide variety of similar programs. However, I do not claim that

- `force`/`deepseq` has no uses
- bang patterns have no uses
- MILU is the way that all space leaks should be fixed (although I’d like to collect instructive examples of leaks where it isn’t)
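To make the claim concrete, here is a toy example of my own (not from the thread in question) of what MILU looks like in its simplest form: a strict field rules out, at the type level, the thunk chain that would otherwise be the leak.

```haskell
-- Toy example (mine, not from the thread): with a lazy field,
--     newtype Counter = Counter Int
-- the Int can be an arbitrarily long chain of unevaluated (+ 1)
-- thunks, i.e. a space leak.  The bang below makes that invalid
-- laziness unrepresentable: no value of Counter can contain the chain.
data Counter = Counter !Int

bump :: Counter -> Counter
bump (Counter n) = Counter (n + 1)

main :: IO ()
main = print (go (Counter 0) (1000000 :: Int))
  where
    -- Runs in constant space: every intermediate count is forced
    -- by the strict field as it is constructed.
    go (Counter n) 0 = n
    go c k = go (bump c) (k - 1)
```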
The reason that I believe it’s important to provide a very simple characterization of the solution to a wide variety of space leaks is that “laziness causes difficulty reasoning about time and space usage of Haskell programs” is a very common refrain from Haskell detractors in online discussions, and dispelling this myth is one of the most important hurdles to clear if we want Haskell to be adopted more widely.
Now on to the comment in question. Thanks, @rhendric, for the detailed reply! I agree that it is very difficult to make invalid states unrepresentable in general. In fact I wouldn’t be surprised if it were even non-computable. I’m less convinced that the same is true of making invalid laziness unrepresentable; it’s a far simpler problem space. However, I am aware of some sophisticated data structures that seem to make use of laziness in an essential way, and that have constraints on how that laziness should appear at run time which are hard to enforce statically. Examples would be Okasaki’s; I don’t understand them well enough to say more.
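For concreteness, here is the flavour of structure I mean, reconstructed from memory, so treat the details as illustrative rather than faithful to Okasaki: a simplified amortized queue where a lazily suspended `reverse` is essential to the cost bounds, yet nothing in the types prevents forcing it at the wrong moment.

```haskell
-- Simplified Okasaki-style queue (my reconstruction, details from
-- memory).  Invariant: the back list is never longer than the front.
-- The `f ++ reverse r` in check is built lazily; the good amortized
-- bounds under persistent use depend on that suspension, but no type
-- enforces when (or whether) it may be forced.
data Queue a = Queue Int [a] Int [a]  -- front length, front, back length, back

empty :: Queue a
empty = Queue 0 [] 0 []

check :: Queue a -> Queue a
check q@(Queue lf f lr r)
  | lr <= lf  = q
  | otherwise = Queue (lf + lr) (f ++ reverse r) 0 []

snoc :: Queue a -> a -> Queue a
snoc (Queue lf f lr r) x = check (Queue lf f (lr + 1) (x : r))

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue _ [] _ _)        = Nothing
uncons (Queue lf (x : f) lr r) = Just (x, check (Queue (lf - 1) f lr r))

toList :: Queue a -> [a]
toList q = case uncons q of
  Nothing       -> []
  Just (x, q')  -> x : toList q'

main :: IO ()
main = print (toList (foldl snoc empty [1 .. 5 :: Int]))
```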
I’d like to collect examples where MILU is not, even in principle, the correct solution. Is the Purescript warning logger such a case? Maybe! Let’s see.
I’d like to challenge this assumption, firstly because if there are good reasons then I’d value being enlightened about them, and secondly because it seems to be implying “assume we don’t want to MILU”, which defeats the point of the discussion. Perhaps the reason is that with a lazy expression tree, some traversals can naturally “fuse” into one overall traversal, improving performance. If that’s so it would be good to come up with a case where we can demonstrate the performance benefit of laziness.
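To sketch the kind of benefit I have in mind (using lists rather than an expression tree, with an example of my own choosing): laziness lets the intermediate structure of a pipeline be consumed cell by cell as it is produced, so several stages effectively run as one traversal.

```haskell
-- Sketch (mine): with lazy lists, the intermediate results of filter
-- and map are demanded by sum one cell at a time, so this pipeline
-- behaves like a single constant-space traversal.  A fully strict
-- structure would materialise each intermediate stage in full.
main :: IO ()
main = print (sum (map (* 2) (filter even [1 .. 1000000 :: Int])))
```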
To fix the problem of bigger-than-necessary memory usage, it seems like there are a few options:

1. Make `MultipleErrors` an instance of `NFData` and then `force` it, in a carefully chosen place.
2. Make `tell` and all the fields of everything in `MultipleErrors` and its dependencies strict.
3. Make a parallel hierarchy of `...Strict` types, and convert to them instead of using `force`.

The current approach is 1.
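In miniature, option 1 looks something like the following sketch (the type, field, and function names here are invented stand-ins, not the real `MultipleErrors` or the real PureScript code):

```haskell
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric #-}

import Control.DeepSeq (NFData, force)
import GHC.Generics (Generic)

-- Invented stand-in for the real MultipleErrors type.
data Errors = Errors [String]
  deriving (Generic, NFData)

-- The one carefully chosen place: deep-force each batch of warnings
-- before accumulating it, so no thunks pile up behind the accumulator.
addWarnings :: Errors -> [Errors] -> [Errors]
addWarnings new acc = force new : acc

main :: IO ()
main = print (length (foldr addWarnings [] (replicate 3 (Errors ["w"]))))
```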
As a brief aside, without 1, the example in question seems to cause a linear increase in memory consumption. Personally, I am primarily interested in cases where laziness causes space leaks of asymptotically more memory than expected. At least, I prefer to spend my efforts on those first, since they cause the most problems. In fact I’m doubtful whether we should include cases where the increase in space usage is only linear under the terminology “space leak”. Typical definitions, such as

> A space leak occurs when there exists a point in the computer program where it uses more memory than necessary

are broad enough to include such linear increases, but also broad enough to include many other things that I don’t think we would actually want to include.
Anyway, the terminology part isn’t too interesting, but I think it’s worth pointing out that the PureScript warnings logger example only seems to consume linearly more memory.
To return from the aside, I would say that a single, well-placed `force` in the codebase doesn’t seem particularly risky. In fact I think it’s easy to understand exactly how it works. I would be more concerned if there were many uses of `force` in a wide variety of places in the codebase, and it was unclear which of them were redundantly doing work that had already been done previously.
If you really must do that kind of thing then @ekmett has a cool trick called `Once`, which combines `deepseq` with MILU. It indicates in the type system the property that a second, redundant `deepseq` won’t actually do any work. That may be as inconvenient in practice as option 3, though.
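My rough reconstruction of the idea (I haven’t seen @ekmett’s actual code, so the names and details here are my guesses): hide the constructor behind a smart constructor that deep-forces, so the type records that the payload is already in normal form and a second `deepseq` would find no work left to do.

```haskell
import Control.DeepSeq (NFData, force)

-- My reconstruction of the trick, not @ekmett's actual code.  In a
-- real library the Once constructor would not be exported: the only
-- way to build a Once is via `once`, which deep-forces the payload.
-- So `Once a` means, in the types, "already in normal form".
newtype Once a = Once { getOnce :: a }

once :: NFData a => a -> Once a
once = Once . force

main :: IO ()
main = print (getOnce (once (map (+ 1) [1, 2, 3 :: Int])))
```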
I would look more carefully at why 1 is not possible. If there really is some essential benefit to making `Expr` strict then I think that would be a very interesting discovery to write up! If there isn’t, then I’d just make it all strict. However, if I were short of time I’d just accept the status quo. One well-placed `force` in the codebase doesn’t seem like an emergency.