The Haskell Unfolder Episode 31: nothunks

Will be streamed tonight, 2024-08-14, at 1830 UTC, live on YouTube.

Abstract:
Debugging space leaks can be one of the more difficult aspects of writing professional Haskell code. An important source of space leaks is unevaluated thunks in long-lived application data; in this episode of the Haskell Unfolder, we will see how we can take advantage of the nothunks library to make debugging and preventing these kinds of leaks significantly easier.

Full announcement here: The Haskell Unfolder Episode 31: nothunks - Well-Typed: The Haskell Consultants

15 Likes

Copying over my comment from Reddit

Thanks for this talk and thanks for the nothunks library!

One thing that confuses me about nothunks is that it does at run time what could be done at compile time (though I take the point that the talk emphasizes that its proper use is at test run time). As a thought experiment, what would it look like if we used th-deepstrict for this purpose instead? Well, I think at the definition point of UserInfo we’d write

$(assertDeepStrict =<< [t| UserInfo |])

and it would tell us that the fields of UserInfo are not strict. We’d then rewrite to

data UserInfo = UserInfo {
    lastActive :: !UTCTime
  , visits :: !Word
  }

and it would tell us that UTCTime is not deep strict, so we’d rewrite to

data UserInfo = UserInfo {
    lastActive :: !(Strict UTCTime)
  , visits :: !Word
  }

and then it would tell us that UserInfo is indeed deep strict. We’re done! We’ve (made invalid laziness unrepresentable](make-invalid-laziness-unrepresentable).

N.B. Strict is from the strict-wrapper library, but I haven’t actually added a UTCTime instance yet. I should!
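For illustration, such an instance would boil down to a type like this (hypothetical; not yet part of strict-wrapper):

import Data.Time (Day, DiffTime)

-- Hypothetical strict version of UTCTime. Day and DiffTime are newtypes
-- over Integer-based representations, so banging the fields forces the
-- values all the way down, making the type deep strict.
data StrictUTCTime = StrictUTCTime
  { sUtctDay     :: !Day
  , sUtctDayTime :: !DiffTime
  }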

Forbidding thunks statically seems much better than checking for them dynamically. I suppose one benefit of nothunks is that we might want a data type to be able to contain thunks and only require them to be absent in certain situations, but that seems of marginal utility. Is there some other reason the dynamic analysis is preferable to the static one?

cc @TeofilC

6 Likes

I agree that a static analysis would be nice. You’d have to think about type parameters (“the type forbids thunks everywhere, provided that the type you pass in does also”), and I guess also higher-kinded type parameters. It’s also not entirely clear to me how it would find out (in your example) that UTCTime is not deep strict; that information would need to be available somewhere, but I’m sure it’s possible to engineer that problem away :slight_smile: I think it’s an approach worth exploring, but I don’t think it’s entirely trivial.
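For instance (a made-up example of the kind of conditional verdict the check would have to produce):

-- Whether this type "forbids thunks everywhere" depends on its argument:
data Pair a = Pair !a !a

-- Pair ()          is deep strict: () has no fields that could be lazy.
-- Pair (Maybe Int) is not: the bangs force the Just/Nothing constructor,
--                  but the Int inside a Just can still be a thunk.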

1 Like

Isn’t that covered by just setting all type parameters to ()?

Interesting, I hadn’t thought about that. I might be missing something obvious, but I feel I’ll have to give it more thought to understand what it even means!

Well, th-deepstrict does it!

{-# LANGUAGE TemplateHaskell #-}

import Language.Haskell.TH.DeepStrict
import Data.Time

$(assertDeepStrict =<< [t| UTCTime |])

test28.hs:6:2: error:
    Data.Time.Clock.Internal.UTCTime.UTCTime
is not Deep Strict, because: 
Data.Time.Clock.Internal.UTCTime.UTCTime
  con Data.Time.Clock.Internal.UTCTime.UTCTime
    field Data.Time.Clock.Internal.UTCTime.utctDay is lazy
    field Data.Time.Clock.Internal.UTCTime.utctDayTime is lazy
  |
6 | $(assertDeepStrict =<< [t| UTCTime |])
  |  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I think it is almost entirely already solved, and just needs a little polishing :slight_smile:

1 Like

Indeed, as @tomjaguarpaw says, my th-deepstrict library already implements this.

Currently th-deepstrict only works with fully concrete types (Maybe Int rather than Maybe a), but I’m planning on eventually implementing something like what you sketch. It’s a bit more work to implement, and in practice I haven’t found myself needing it, so it might be a while before I get to it.

We can access this information through Template Haskell’s reify interface. It can show you the definition of any datatype in your dependency tree, even if the constructors weren’t exported.
We don’t have to specify the strictness properties of any types at all (though some datatypes might need manual overrides, because Array# etc. are not deep strict); everything else is inferred. For example, even built-in types like Int don’t need to be declared deep strict: we can see that Int is deep strict because it has a single field of unlifted type.
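For example, a minimal sketch (my illustration, not th-deepstrict’s actual code) of what reify gives you:

{-# LANGUAGE TemplateHaskell #-}

import Data.Time (UTCTime)
import Language.Haskell.TH

-- Print the full definition of UTCTime at compile time, including the
-- Bang annotations on its fields, even though its constructor is not
-- exported from Data.Time.
$(do info <- reify ''UTCTime
     runIO (print info)
     pure [])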

I’m happy to answer any questions about it

1 Like

Sounds like I should take a look :slight_smile: Perhaps even worth an episode in its own right :slight_smile:

3 Likes

I would say so. Very useful work from @TeofilC!

3 Likes

Is there also an explanation of the advantages of nothunks over options like -XStrict?

I think static guarantees are nice, and if they’re correct by construction, that’s even nicer.

I have a number of reasons, though, why I think that both th-deepstrict and nothunks have their place, rather than th-deepstrict being strictly (sorry) superior:

  1. It seems th-deepstrict is only useful when I really want a value to be completely strict. Personally, I think that’s unrealistic in larger settings. What I like about nothunks is that I can choose exactly what kind of invariant I want to apply (see the sketch after this list). I see no reason that th-deepstrict couldn’t allow this too, and perhaps it’s actually possible by using assertDeepStrictWith, but I’m not sure.

  2. There’s still a difference between a type being (deeply) strict and a value having no thunks. The latter can be true even if the former isn’t. And the only plausible way to establish the latter (without reintroducing a need for testing via something like nothunks) may be to apply a potentially very costly deepseq in various places, especially if we do not control the data types in question because they come from other packages.
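To make 1 concrete, this is the kind of per-type invariant I mean, using nothunks’ deriving-via helpers (the type and field names are made up for illustration):

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DerivingStrategies #-}
{-# LANGUAGE DerivingVia #-}

import GHC.Generics (Generic)
import NoThunks.Class (AllowThunksIn, NoThunks)

-- Thunks are permitted in the one designated field; all other fields
-- must be thunk-free whenever the value is checked.
data Config = Config
  { cacheSize        :: !Int
  , expensiveDefault :: [String]  -- deliberately lazy, exempt from the check
  }
  deriving stock (Generic)
  deriving NoThunks via AllowThunksIn '["expensiveDefault"] Config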

Regarding the requested comparison with -XStrict:
The Strict and StrictData language extensions primarily switch defaults. I find Strict too invasive and would never use it. StrictData is in principle fine, but I don’t think it really solves the problem, as it only switches the syntactic default (see the sketch below). You have to think about the exact invariants you want to hold for your datatypes, and at some point you might want to use additional tools to independently check that you got them right.
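To spell out the “syntactic default” point, a minimal sketch:

{-# LANGUAGE StrictData #-}

-- StrictData makes this field behave as if it were written !(Maybe Int) ...
data T = T (Maybe Int)
-- ... but it says nothing about the types *inside* the field: the Int
-- under a Just can still be a thunk.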

2 Likes

Yes, I definitely agree these approaches are complementary.

You can use th-deepstrict in a scenario where you want some lazy parts in your datatype. Instead of using assertDeepStrict, you can dump the output from isDeepStrict to a file and set up a golden/snapshot-style test. Then you can keep whatever laziness you desire, but get a CI failure if you accidentally add an unintentionally lazy field (e.g., by using a regular lazy Maybe). This is also helpful when you want to make a large datatype less lazy over time.
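A sketch of that setup (file name hypothetical, UserInfo standing in for your own type, and assuming the report returned by isDeepStrict can be rendered with show; see the th-deepstrict haddocks for the exact types):

{-# LANGUAGE TemplateHaskell #-}

import Language.Haskell.TH (runIO)
import Language.Haskell.TH.DeepStrict (isDeepStrict)

-- Dump the strictness report at compile time; a golden test then compares
-- golden/UserInfo.deepstrict against the version checked into the repo.
$(do report <- isDeepStrict =<< [t| UserInfo |]
     runIO (writeFile "golden/UserInfo.deepstrict" (show report))
     pure [])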

You can also use the *With functions to override the inferred strictness of some datatypes.

1 Like

So, if I shouldn’t use nothunks in production, what should I use?
How should I monitor memory?

If a memory leak crashes the server, should I pepper the code with nothunks checks and CPP flags, deploy it in prod, and hope I get a hint?

If the map from the video had actually been fully evaluated in order to dump and query it, then the thunk chasing wouldn’t matter.
How do I isolate the thunks that matter from the ones that don’t, for the same type, in different functions, at run time in production?
Can I annotate functions with debug symbols that survive inlining, so that I can make strict just the things I want?

While it’s a good thing to make invalid laziness unrepresentable, only experience lets you foresee which functions will benefit from strict types. You can’t go around deepseqing every type just in case the previous laziness becomes invalid as you reuse the type for a new function. And even then, you can deepseq an infinite stream and explode anyway.

Regarding dealing with space leaks, my advice is the following:

  1. Make invalid laziness unrepresentable. That is, design your types to be free of space leaks in the first place. In the same way you simply wouldn’t use the strings "TRUE" and "FALSE" to represent booleans, don’t use data MyPair = Pair Int Int to represent a pair of fixed-precision integers. When evaluated, it’s not a pair of evaluated fixed-precision integers! It’s a pair where each component is either a fixed-precision integer or a thunk (a potential space leak). Instead, use data MyPair = Pair !Int !Int.

    Similarly, don’t use data MyPair2 = Pair !Int !(Maybe Int). There’s a thunk (potential space leak) hiding in that Maybe. Instead use data MyPair2 = Pair !Int !(Strict (Maybe Int)). (See the strict-wrapper library.)

  2. Use th-deepstrict to confirm that the data types that you are defining don’t hide space leaks.

  3. Only use the space-leak-free versions of various library functions. This is a bit more awkward, because you have to know which ones to avoid. For example, you should only ever use foldl', not foldl (see the sketch after this list); Data.IORef.modifyIORef', not Data.IORef.modifyIORef; and Control.Monad.Trans.State.modify', not Control.Monad.Trans.State.modify.

    (Maybe one day this knowledge will be encoded into stan or some other static analyser, so everyone doesn’t have to just remember it.)

  4. If you come across a space leak nonetheless, use GHC’s heap profiler with retainer profiling. That should give you a good idea of which data type the space leak occurs in. Then, if it’s your data type, you can go back to 1 to fix it, perhaps using nothunks to help diagnose. Once fixed, use th-deepstrict to ensure that the data type doesn’t regress. On the other hand, if the space leak is in a library you’re using then it’s more tricky. I guess file a bug report upstream; see, for example, my patch to megaparsec.
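To illustrate 3, the classic example (a minimal, self-contained sketch):

import Data.List (foldl')

-- foldl builds a ten-million-deep chain of (+) thunks before anything is
-- evaluated; foldl' forces the accumulator at every step and runs in
-- constant space. (With optimisations GHC can sometimes rescue the foldl
-- version, but you should not rely on that.)
leaky, fine :: Int
leaky = foldl  (+) 0 [1 .. 10000000]
fine  = foldl' (+) 0 [1 .. 10000000]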

I don’t think I really follow this. It’s not a question of “functions benefitting from strict types”. It’s a question of enforcing invariants on your data types (as @kosmikus explains in the linked video). If there’s no need for laziness in your data type then enforce its absence by making invalid laziness unrepresentable, and it will be space leak free! The point of making invalid laziness unrepresentable is that deepseq becomes simply the same as seq: there is no longer any deep laziness to seq! deepseq is a massive anti-pattern; if you find yourself using it then something has likely gone terribly wrong. (For a discussion of the boundary between legitimate deepseq use and anti-pattern use, see Deepseq versus "make invalid laziness unrepresentable".)
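Concretely, a minimal sketch of that point:

-- Once a type is deeply strict, WHNF and NF coincide: forcing the
-- constructor forces everything inside it, so plain seq already gives
-- the guarantee you would otherwise reach for deepseq for.
data MyPair = Pair !Int !Int

forcePair :: MyPair -> b -> b
forcePair = seq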

6 Likes

What I’m saying is that invariants may change.
Imagine I have:

data Foo = Foo Int Int

-- Pattern matching forces the Foo constructor but not its fields;
-- only the field that is actually chosen ever gets evaluated.
cond x (Foo a b) = if x then a else b

-- Thanks to laziness, veryExpensiveComputation never runs when x is True.
f x = cond x $ Foo someThing veryExpensiveComputation

Now imagine I forget this amid a sea of functions.
I keep working, and in a few months, I introduce the problem in the video, with Map.
There’s a Map of Foos somewhere.
If I make Foo strict to fix the thunks in the Map problem, I’m degrading the system somewhere else. I’ve introduced a regression.
It’s the same as enabling -XStrict to just fix the leaks: somewhere, something really needed laziness.
You could say that I could assert laziness and strictness everywhere. But in the cases where you use both, and seq on demand, you either store up surprises for the future or have to duplicate your business types into strict and lazy variants, with all the duplication and conversion between the two that this entails.

Rather than duplicating types and making and maintaining strict logic and lazy logic, I figure that laziness as the default, plus profiling in production environments to seek out the places to seq and assert, can go a long way, provided you have a good profiler and tools to debug with.

1 Like

Oh yes, absolutely! If you’re using laziness in an essential way in your data type then you indeed can’t change that and expect everything to work fine. I strongly suggest not using laziness like that, though. It’s a cute trick that can end up doing more harm than good.

I figure it can’t. I guess only time and experience will bear out which prediction is correct.

2 Likes

I’ve had something strange happen to me when using Seq. The code goes something like this:
\x -> x `deepseq` maybe () (error . show) (unsafeNoThunks x) `seq` …

And this failed, telling me that x (a Seq) was a thunk. (So no other context, just the Seq itself.)

Edsko said there was some problem with how Seq works internally; does anybody know more?

Thanks in advance.

When you say “Seq” do you mean Data.Sequence.Seq? It’s a bit unclear when you’re also talking about Prelude.seq!

Yes, I do! (I was actually pretty close to putting a note that the difference in capitalisation was no accident '^^)

1 Like

Can you share some code that exhibits this behaviour? I couldn’t replicate it with a quick experiment.
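For reference, the quick experiment was along these lines (a sketch of my guess at the repro, not your original code):

import Control.DeepSeq (deepseq)
import qualified Data.Sequence as Seq
import NoThunks.Class (unsafeNoThunks)

-- After deepseq, unsafeNoThunks should find no thunks in the Seq and
-- print Nothing; the behaviour described above would print a ThunkInfo.
main :: IO ()
main = do
  let xs = Seq.fromList [1 .. 100 :: Int]
  xs `deepseq` print (unsafeNoThunks xs)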

Probably not easily; I might have gotten rid of the change again ^^’

Maybe it was just a fluke, but it kept me thinking, so I thought I might just ask.