State monad - memory exhausted

I don’t know, it just seems too important to be relegated to some third party tool.

The fundamental rule is that part of your programming process is reasoning about laziness. Every time you apply a pure operation to a piece of data, it is applied lazily and will only matter once that piece of data is demanded. You don’t have to be painstaking about it unless you want the maximum bang for your buck, but if you do no work in this department you’re destined for situations like the one you described.
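
To make that concrete, here is a minimal, hypothetical illustration of the kind of thunk build-up being described (the names and numbers are made up):

import Data.List (foldl')

-- The lazy left fold adds nothing until the result is demanded, at which
-- point a ten-million-deep chain of (+) thunks has to be collapsed (at least
-- when compiled without optimisations). The strict variant forces the
-- accumulator at every step and runs in constant space.
leaky, fine :: Integer
leaky = foldl  (+) 0 [1 .. 10000000]
fine  = foldl' (+) 0 [1 .. 10000000]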

On a broader scale, try to think about the problems you solve through a functional lens. Instead of writing a sequence of operations that perform side effects in place, figure out what the inputs to your system are, what the state is, and what output you need. You do not have to use monad cakes to solve problems; I would even argue they’re harmful because they promote imperative programming in a place that does not require it (and everything that does could instead use IO directly).
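
As a rough sketch of that framing (all names invented for illustration): model one step of the system as a pure function from input and state to output and new state, and keep the side effects in a thin wrapper around it.

data Input   = Deposit Int | Withdraw Int
type Balance = Int
type Output  = Either String Int

-- One pure step: no in-place effects, just inputs, state and output.
step :: Input -> Balance -> (Output, Balance)
step (Deposit n)  bal = (Right (bal + n), bal + n)
step (Withdraw n) bal
  | n <= bal  = (Right (bal - n), bal - n)
  | otherwise = (Left "insufficient funds", bal)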

1 Like

On a broader scale, try to think about the problems you solve through a functional lens.

…from a (mathematically) functional point of view:

  • by breaking the original problem down into sub-problems,

  • then repeating that process on those sub-problems,

    • until those (sub-sub-sub-…) problems are simple enough to solve directly.

Top Down Design (2017)


You do not have to use monad [layer] cakes to solve problems […]

…unless:

  • someone else gives you a serving of monadic “layer-cake” as a dependency,

  • and there’s no way to contain it (in the way runST can for monadic ST s a terms).

Then you can expect much of your code to also be passing along servings of monadic “layer-cake”. If you find that annoying…you are not alone:


I would even argue they’re harmful because they promote imperative programming in a place that does not require it (and everything that does could instead use IO directly).

…which most tend to do anyway, having no great interest in the “slicing” most effect systems offer: ultimately such systems also serve monadic “layer-cakes” (with 2^N combinations, for N individual “layers”).

Thank you, I see your point that it is a mindset, rather than a rule.

But consider this. Taking heavy inspiration from the Well-Typed blog link above, say I am in an organisational setting, and a PR lands on my desk with a small change. In reviewing this change I am guilty of not doing a thorough top-down analysis of the problem, and I approve the seemingly innocuous change, which passes CI but which, as you’ve no doubt guessed by now, leads to a horrible server-crashing memory leak.

Do you think it is acceptable that GHC provides no warning (even a noisy one) of this situation occurring?

We’re all about type safety, but I wonder whether we have paid enough attention to memory safety in GHC.

1 Like

Relative to how often this issue arises, I would say the status quo is fine. I would prefer seeing this problem solved from the ground up (being able to just say a certain piece of data is always strict), instead of bringing in a whole host of speculative tools.

I do not advocate for production-grade Haskell, unless the company knows what it’s doing. My answer to “what if someone makes a stupid change and it sets the production environment ablaze” is “just don’t use Haskell in production”. There has to be a better way to structure code that doesn’t require sledgehammer solutions.

And, of course, it’s not just a production problem; the entire ecosystem is like this. Libraries choose to hide laziness from the end user, so the only time you come across it is when you find out your little program takes 8 GB of memory to run, at which point your forum question yields a bunch of “that’s why I use this cool library to solve the language for me” answers.

Yes, that’s right.

nothunks was the inspiration for my article Make Invalid Laziness Unrepresentable. After I learned about that library I thought “after we’ve checked a value for thunks it would be nice to indicate that in the type, so that we know we don’t have to ever check it again” (I was imagining introducing a ThunkFree newtype wrapper). Then I realised that the simpler approach was to forbid thunks from occurring in the first place, through a better definition of the type in question!

Yes, that’s why you should make invalid laziness unrepresentable. Then you don’t have to rely on any tool. The impossibility of thunks just becomes part of your program.
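
As a small sketch of that idea (not the article’s own example), strict fields make the thunk-free property part of the type, so there is nothing left to check at runtime:

data Totals = Totals
  { runningSum   :: !Int
  , runningCount :: !Int
  }

-- The bang-annotated fields are forced on construction, so a Totals value
-- can never accumulate a chain of suspended additions.
addSample :: Int -> Totals -> Totals
addSample x (Totals s c) = Totals (s + x) (c + 1)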

I don’t think this situation is acceptable, but it’s not GHC’s fault. The author of loopM chose to allow thunks to build up in the state that is passed between iterations (similar to foldl). Why should GHC object to that? Maybe that’s necessary for some applications. If the author wanted different behaviour he should have implemented it differently. If the user wanted different behaviour she shouldn’t have used loopM. It’s not GHC’s job to decide! (This is why I have opened the discussion Why shouldn't I make my monads "value strict"?)
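
For illustration, assuming loopM from the extra package with its usual signature Monad m => (a -> m (Either a b)) -> a -> m b, a caller who wants strict state can force it before handing it back; a hypothetical counter loop:

import Control.Monad.Extra (loopM)

sumToIO :: Int -> IO Int
sumToIO n = loopM step (0, 0)
  where
    -- Force the accumulator before passing it to the next iteration; loopM
    -- itself never looks inside it, so it would otherwise stay a thunk chain.
    step (i, acc)
      | i > n     = pure (Right acc)
      | otherwise = let acc' = acc + i
                    in acc' `seq` pure (Left (i + 1, acc'))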

So what exactly do I find unacceptable? Our ecosystem has 100 laziness footguns: foldl, modifyIORef, Control.Monad.Trans.State.Strict.modify, all of Control.Monad.Trans.State (i.e. not .Strict), all of Data.Map.Lazy, … . It is so easy to create catastrophic space leaks using them that they should only be used by experts in very specific circumstances (generally to eke out performance). But we don’t do a very good job of educating users about this in general, nor of finding ways to discourage use of the footguns through other means.
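
To pick one footgun from that list: the safe alternative is often a single character away, but nothing pushes users towards it.

import Data.IORef

-- modifyIORef only stores the unevaluated (+ 1) application, so repeated
-- bumps without an intervening read pile up thunks inside the IORef;
-- modifyIORef' forces the new value before writing it back.
bump, bump' :: IORef Int -> IO ()
bump  ref = modifyIORef  ref (+ 1)
bump' ref = modifyIORef' ref (+ 1)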

7 Likes

Just a few notes while skimming the thread:

Tangentially, I never understood why GHC can’t rewrite functions to go form by itself.

You can enable it manually with -fstatic-argument-transformation (although for some reason it doesn’t seem to work in this case). If the function is not inlined then it will have to allocate a closure for that go function. @sgraf is working on improving it though. I believe the latest idea was to only apply this transformation if it makes it possible to inline the function.
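
For readers unfamiliar with the term, “go form” (the static argument transformation) just means hoisting arguments that never change across recursive calls into a local helper, roughly:

-- 'f' is a static argument: it is passed along unchanged on every call.
mapDirect :: (a -> b) -> [a] -> [b]
mapDirect f []     = []
mapDirect f (x:xs) = f x : mapDirect f xs

-- After the transformation, 'f' is captured once by the local 'go'.
mapGo :: (a -> b) -> [a] -> [b]
mapGo f = go
  where
    go []     = []
    go (x:xs) = f x : go xs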

Yes, there’s #18962: SAT should only influence the unfolding · Issues · Glasgow Haskell Compiler / GHC · GitLab and the prototype in !4553: WIP: SAT: Attach SAT'd definition as INLINABLE unfolding · Merge requests · Glasgow Haskell Compiler / GHC · GitLab. Sadly, I continually lack the time and priority to fix the (conceptually unrelated) regressions it introduces. If I were to implement GHC on a green field, this would definitely have been the way I’d have implemented SAT in the first place.

Kind of, but I think in most cases the thunks are forced rather quickly and no leak occurs. So you’d get a lot of false positives. Edsko de Vries from Well-Typed has written the nothunks library which can give warnings if there are thunks in your code: Being lazy without getting bloated - Well-Typed: The Haskell Consultants .
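
A rough usage sketch (assuming nothunks’ generic deriving and its unsafeNoThunks entry point; the Stats type is invented):

{-# LANGUAGE DeriveGeneric, DeriveAnyClass #-}
import GHC.Generics (Generic)
import NoThunks.Class (NoThunks, unsafeNoThunks)

data Stats = Stats { count :: Int, total :: Int }
  deriving (Generic, NoThunks)

-- Check a value at a convenient point (e.g. in a test or an assertion):
-- Nothing means no unexpected thunks were found inside it.
checkStats :: Stats -> IO ()
checkStats s = case unsafeNoThunks s of
  Nothing   -> pure ()
  Just info -> fail ("unexpected thunk: " ++ show info)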

Edit: I confused nothunks with the purported noupdate primitive/Edsko’s dupIO package. nothunks seems like an adequate runtime verification procedure, but a static analysis would be far more helpful. I’ll leave the following 2 paras untouched, but bear in mind that they relate to omitting update frames with noupdate.

Note that in this case, the closure of the thunk retains the chain of + 1s. I’d hypothesize that omitting the update frame here would not improve anything because that thunk is never evaluated before memory runs out.

And if it were evaluated multiple times, I’d rather have updated to an I# 9000# than retain the chain of closures for the next eval… That would be an example where nothunks/noupdate would make things worse.

Do you think it is acceptable that GHC provides no warning (even a noisy one) of this situation occurring?

I don’t think it’s acceptable, but I wouldn’t pin it on GHC, either.
But perhaps a linter like hlint could implement a pass that warns about these situations, or flags places where a thunk/data structure is retained over a potentially very long function call.
Alas, my interests are as expansive as my time to pursue them (e.g., during my PhD) is finite.
Perhaps someone else would be interested in writing such a static analysis; I think we could get really cool results quite fast. Definitely worth a publication.

3 Likes

Why is that? (Post must be 20 chars).

It’s fair to expect GHC to produce warnings if it fits into its compilation pipeline. But above I sketched an entirely new static analysis that is not relevant to compilation in any way, yet requires its own pass (multiple, probably) over the whole program. There’s no reason to burden development and every run of the compiler with this overhead; rather I’d expect some kind of static analysis tool to be run (perhaps nightly) by CI. That’s good: Such a tool (hlint, stan or a Core plugin) is not subject to the same stability requirements as GHC.

GHC has (semantic, hence non-trivial) analyses which are non-essential to compilation such as pattern-match coverage checking. But that analysis fits quite neatly into the structure of the desugaring pass. Even then, for some complicated test inputs you can observe a significant drop in compilation performance entirely due to coverage checking. I suggest we do not add to that.

…amongst others, here and elsewhere! But as it happens, maybe something can be done about the weather:

If the observations made in those articles:

  1. can be verified for the halting problem,

  2. then extended to Rice’s theorem,

it could be possible to “have it all”. Otherwise:

…or revitalising Robert Ennals’s previous work:

1 Like

So performance reasons, essentially? Compared to C, gcc -fanalyzer is expensive, but opt-in.

https://gcc.gnu.org/onlinedocs/gcc-13.2.0/gcc/Static-Analyzer-Options.html

Yes, perf and stability. Personal opinion: contributing to GHC, fulfilling as it might be, has lost quite a bit of momentum in recent years due to the maturity of the project, multiplied by the churn introduced by such a large code base.

Edsko de Vries from Well-Typed has written the nothunks library which can give warnings if there are thunks in your code:

nothunks is great and I’m happy to have it, but IMHO it’s using the typeclass system to overcome a feature deficiency in GHC. If I could have my way, I would transform nothunks uses into Unlifted types. That way my types just feel cleaner, because Foo :: a (a thing that can be a thunk, and therefore includes ⊥) isn’t masquerading as something that it isn’t (a thing that does not contain ⊥ as a value). So I would find this approach cleaner because I have type-level witnesses instead of typeclass constraints that serve as witnesses. I guess I should help @jaror improve the ergonomics of Unlifted.
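
A sketch of what that could look like with today’s UnliftedDatatypes (GHC 9.2+, names invented): the kind alone rules out thunks, no constraint needed.

{-# LANGUAGE UnliftedDatatypes, StandaloneKindSignatures #-}
import GHC.Exts (UnliftedType)

-- A value of Counter is always evaluated because its type lives in
-- UnliftedType, so there is no thunk to check for.
type Counter :: UnliftedType
data Counter = MkCounter !Int

increment :: Counter -> Counter
increment (MkCounter n) = MkCounter (n + 1)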

What about extending the use of strictness annotations to type signatures?

foo :: !T
foo = ...
1 Like

Would you expect this to type-check?

xs :: [!Int]
xs = map (+ 1) [error "blah"]

If so, what code would you generate for Data.List.map?

Essentially, ! in a type signature is just syntactic sugar for a zero-cost coercion into the UnliftedType kind, and it is not entirely trivial to embrace that in our compilation pipeline.

1 Like

[…] what code would you generate […]?

Something like the code that presumably would be generated for:

xs :: [Int]
xs = map (+ 1) [error "blah"]

when using -XStrict in GHC.


Well, you could try approaching the problem from the “other direction”:

Lazy Evaluation for the Lazy: Automatically Transforming Call-by-Value into Call-by-Need (2023)

But map has not been compiled with -XStrict, so it won’t evaluate the list cells it returns.
Hence [!Int] (which to me says “When you evaluate to (x:_)::[!Int], then x is also evaluated”) would be very misleading, because that is not at all what is guaranteed by what is returned by map.

The solution is that you need two versions of map: one that you call when the list cells are “lazy” (lifted) and one in which the list cells are “strict” (unlifted). With that in mind, map is actually pretty much an overloaded function. Of course, we wouldn’t want to pay for overloading, so we’d probably specialise every “levity polymorphic” function. But map has type forall a b. (a -> b) -> [a] -> [b] and we so far have only discussed levity polymorphism in b. What about levity polymorphism in a? That would lead to 4 specialisations for the same map function. Fortunately, it is OK to simply presume that a is lifted and insert an eval just in case (think of UnliftedType as a subtype of LiftedType with a zero-cost coercion), so 2 specialisations suffice, but that is not true in general and you can see why this doesn’t scale.
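
Written out by hand, the two specialisations would look roughly like this (today’s [] can only hold lifted elements, so the “unlifted b” version is approximated here by forcing each element before consing it, which is what a list of strict elements would give for free):

mapLazyElems :: (a -> b) -> [a] -> [b]
mapLazyElems f []     = []
mapLazyElems f (x:xs) = f x : mapLazyElems f xs

-- The strict-element specialisation must evaluate f x before building the
-- cons cell; this is exactly the extra code the overloading would select.
mapStrictElems :: (a -> b) -> [a] -> [b]
mapStrictElems f []     = []
mapStrictElems f (x:xs) = let y = f x in y `seq` (y : mapStrictElems f xs)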

All that to say: It’s not as simple as “proposing” ! in types; that’s merely a piece of syntax without a specification of its non-compositional semantics.

Incidentally, we could make [] levity polymorphic today, e.g. [] :: forall (l::Levity). TYPE (BoxedRep l) -> LiftedType. This l defaults to Lifted anywhere it can’t be inferred, which would mean most written code out there should keep compiling. So actually we could write [!Int] as [Strict Int], where Strict :: LiftedType -> UnliftedType such as in Data.Elevator. So !a could just be syntactic sugar today for Strict a. But that does not help, because we can’t reuse all the existing definitions working just on the lifted variant of map.
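
Roughly the shape of that wrapper, as a hedged sketch (see the data-elevator package for the real Data.Elevator.Strict):

{-# LANGUAGE UnliftedDatatypes, StandaloneKindSignatures #-}
import Data.Kind (Type)
import GHC.Exts (UnliftedType)

-- Constructing a Strict forces its payload, and the wrapper itself can never
-- be a thunk because it lives in UnliftedType; [Strict Int] then plays the
-- role of the hypothetical [!Int].
type Strict :: Type -> UnliftedType
data Strict a = Strict !a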

While some functions, such as foldr, can easily be made levity polymorphic in the list element type parameter without requiring separate specialisations (foldl' even in both type params, I think, #15532: Relaxing Levity-Polymorphic Binder Check for Lifted vs Unlifted pointers · Issues · Glasgow Haskell Compiler / GHC · GitLab), in other cases such as map we can’t get around generating twice the amount of code (or suffer from unknown calls to a dictionary carrying around the implementation of seq (unlifted) / flip const (lifted)). I argue that we’d require opt-in from the user to do so via a change in the type signature (map :: LevPoly l => forall a (b::TYPE (BoxedRep l)). (a -> b) -> [a] -> [b]).

2 Likes

But map has not been compiled with -XStrict, so it won’t evaluate the list cells it returns.

It shouldn’t need to - the call to map would be implicitly “strictness-lifted” by the implementation to provide that strict version of map, in a similar fashion to how e.g. the strict Complex a constructor (:+) is really a lazy constructor which has been “strictness-lifted” through the use of extra calls that evaluate its components.


All that to say: […]

…people are being annoyed by “type acrobatics”:

  • Monad, I Love You […] (2022)
    https://www.youtube.com/watch?v=2PxsyWqZ5dI

Wouldn’t that mean every function needs to have 2^args versions for each choice of levity downstream? Isn’t proper levity polymorphism support a way more straightforward solution at that point?

…not according to the OP here, and others:


Wouldn’t that mean every function needs to have 2^args versions for each choice of levity downstream?

That’s more of a “provide-it-all-now” solution. I’m thinking more “provide-only-as-needed”, where strictness annotations would be expanded as they are encountered by adding the extra calls needed to evaluate (sub)terms.