Question for anyone who knows stuff about how GHC optimizes things:
I’m working on a Haskell project in which I defined and used my own `Either`-isomorphic data type. This involved writing versions of any `Either`-handling utilities I wanted to use for this type. I did this in part because I wanted to experiment with the impact of marking one or both of the constructors as strict. (So a simple newtype wrapper plus pattern synonyms wouldn’t be an acceptable alternative.)
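For concreteness, here’s a sketch of the kind of definition I mean (the names `Or`, `Fst`, and `Snd` are hypothetical stand-ins):

```haskell
-- Hypothetical names; the real type is isomorphic to Either.
-- The point of a fresh data type is to be able to later write
-- e.g. "Fst !a" — a strictness annotation a newtype over Either
-- could never express.
data Or a b
  = Fst a -- plays the role of Left
  | Snd b -- plays the role of Right
  deriving (Show, Eq)

-- Hand-rolled counterpart of Data.Either.either.
or' :: (a -> c) -> (b -> c) -> Or a b -> c
or' f _ (Fst a) = f a
or' _ g (Snd b) = g b
```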
I’ve been using `weigh` to profile allocations, and I was surprised to note that, even before using any strictness annotations, the number of allocations in my test suite went up after making this replacement. (This is with `-O2`.)
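For anyone unfamiliar with `weigh`, the measurement setup looks roughly like this (the label and workload below are placeholders, not my actual test suite):

```haskell
import Weigh (func, mainWith)

-- weigh forces the result (via NFData) and reports the bytes
-- allocated while computing it.
main :: IO ()
main = mainWith $ do
  func "sum [1..1000]" sum [1 .. 1000 :: Int]
```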
Thinking that surely I must have made some accidental change in the process of refactoring, I commented out my custom type and its instances and replaced it with a type synonym pointing to `Either` and bidirectional pattern synonyms for the constructors, while keeping all of my replacement utility functions as-is. The allocation count went back down to its original value.
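That substitution looks roughly like this (same hypothetical names as before):

```haskell
{-# LANGUAGE PatternSynonyms #-}

-- Swap the custom type for Either without touching any use sites.
type Or = Either

-- Implicitly bidirectional: each works as both pattern and expression.
pattern Fst :: a -> Or a b
pattern Fst a = Left a

pattern Snd :: b -> Or a b
pattern Snd b = Right b

-- Tell the exhaustiveness checker these two cover everything.
{-# COMPLETE Fst, Snd #-}
```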
I then dumped the Core output of all relevant modules, commented out the above synonyms, uncommented the original type definition, and dumped the Core output again. The two sets of Core files appear identical, up to local variable names, line breaks, and the names of the two constructors replacing `Left` and `Right`.
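For reference, the dumps were produced with the standard flags for this kind of comparison (exact suppression flags may vary; these are the usual ones):

```shell
# Write simplified Core to .dump-simpl files, suppressing uniques
# and id info so the two runs are easier to diff.
ghc -O2 -ddump-simpl -ddump-to-file -dsuppress-uniques -dsuppress-idinfo Main.hs
```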
The only two instances for `Either` that I’m using are `Functor` and `Bifunctor`, and I’ve written those instances to be character-for-character identical to their definitions in `base`, except for type and constructor names. Everything else that’s executable code should be unchanged when I do the `Either`-synonym switcheroo.
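Concretely, the instances mirror the `base` definitions for `Either`, modulo renaming (type repeated here to keep the sketch self-contained):

```haskell
import Data.Bifunctor (Bifunctor (..))

data Or a b = Fst a | Snd b
  deriving (Show, Eq)

-- Character-for-character the base instances for Either,
-- with only the names changed.
instance Functor (Or a) where
  fmap _ (Fst x) = Fst x
  fmap f (Snd y) = Snd (f y)

instance Bifunctor Or where
  bimap f _ (Fst a) = Fst (f a)
  bimap _ g (Snd b) = Snd (g b)
```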
So I’m not sure how to explain the difference in run-time behavior. Are there optimizations GHC applies after the Core simplification passes that might treat the literal `Either` type differently from an isomorphic one?