tl;dr I think the Haskell Foundation should focus on low-hanging fruit (namely documentation and guidelines) for most of the problems outlined in @ozataman’s original post, rather than focusing its effort primarily on technical improvements to GHC.
Additionally I believe that the Haskell Foundation should strive to be as unopinionated as possible in its focus and recommendations, and that separate “Working Groups” should be established to address concerns for specific areas (e.g. DevOps, Nix, etc.).
I want to preface my response by saying that I think all of the Haskell-specific points in this post are worth thinking about and improving in GHC; I’m taking the time to write this up for two reasons:
- So that other prospective industrial Haskell users can chart a path for themselves that avoids these problems without having to wait for GHC to implement technical improvements
- To encourage the folks working on a technical agenda to focus on what I believe to be the “lowest hanging fruit” for us as a community: establishing a set of best practices, design patterns, and general frameworks for architecting large Haskell applications that avoid these sorts of pitfalls
Faster Compile Times
tl;dr We should always strive to improve GHC’s performance, but for most users I posit that the slowdowns people have experienced can be immediately mitigated by a proper set of best practices
I’m going to make the (perhaps bold) assertion that GHC’s performance as a compiler is actually mostly fine, and that the vast majority of compile-time regressions users encounter result from writing code, and pulling in dependencies, that rely on the increasingly advanced language features GHC incorporates.
I’m not suggesting that compilation time isn’t a good metric to target for improvement (regression-testing compiler performance between releases is a good thing), but for most codebases I have experience with, the issue is more often that Haskell developers reach for extremely baroque language features to accomplish common tasks.
In other words: Haskell programmers should learn from C++ developers and establish some guidelines for:
- practical “novelty budgets” within common classes of Haskell codebases
- the performance implications of certain language features which often tend to make up these novelty budgets
- playbooks for recovering from situations where unmitigated technical debt has “blown” a project’s novelty budget
  - I imagine there are plenty of industrial users who have ended up in this situation and might be able to offer some insight
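To make the “novelty budget” idea concrete, here is a deliberately boring sketch (the `Backend` type and its connection strings are hypothetical, purely for illustration): where a codebase might reach for type classes with type families to dispatch on a backend, a plain sum type and ordinary pattern matching compiles quickly, needs no extensions, and is easy to audit.

```haskell
-- A low-novelty alternative to type-level dispatch: a plain sum type
-- and ordinary pattern matching. No language extensions required.
data Backend = Postgres | Sqlite
  deriving (Show, Eq)

-- Hypothetical connection strings, for illustration only.
connectionString :: Backend -> String
connectionString Postgres = "postgres://localhost"
connectionString Sqlite   = "file:app.db"

main :: IO ()
main = putStrLn (connectionString Sqlite)
```

The point isn’t that type-level programming is never warranted, but that each use should be weighed against this kind of boring baseline and counted against the project’s budget.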
Lower Memory Usage
tl;dr As with compilation speed issues, GHC’s resource consumption is very likely to be “adequate” so long as developers avoid features that are known to trip the compiler up unless they are absolutely necessary.
I feel the same way about this as I do about the section above: there are probably many areas where GHC can be improved on a technical level, but the vast majority of memory bloat in most industrial codebases will almost certainly come from people using advanced or complex language features.
I feel fairly safe claiming that GHC (and much of the associated tooling) should handle projects upwards of 1,000 modules and 100k LoC without much issue, as long as the project is well-architected.
Likewise (although you didn’t mention it), the runtime footprint of Haskell applications can be extremely small, yet many industrial codebases find themselves consuming vast amounts of memory or burning countless CPU cycles.
The key point, in my mind, is for us to understand what sorts of constructs/behaviors cause GHC’s resource consumption to balloon (both at compile-time and at runtime), and to avoid them as much as possible.
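The best-known runtime example of such a construct is lazy accumulation: a lazy left fold builds a chain of unevaluated thunks proportional to the input, while its strict counterpart runs in constant space. A minimal sketch:

```haskell
import Data.List (foldl')

-- foldl builds one thunk per element before any addition happens;
-- foldl' forces the accumulator at each step, keeping space constant.
sumLazy, sumStrict :: [Int] -> Int
sumLazy   = foldl  (+) 0  -- classic space leak on large inputs
sumStrict = foldl' (+) 0  -- constant-space accumulation

main :: IO ()
main = print (sumStrict [1 .. 1000000])
```

A catalogue of patterns like this one (lazy accumulators, unforced record fields, and so on), alongside their compile-time analogues, is exactly the kind of documentation I have in mind.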
Mac Support Needs to Get Better
As far as I’m aware, linker errors on macOS these days are wholly confined to Nix build setups.
If there are mitigations on the GHC side of things which can improve this, we should certainly pursue them; however, I don’t think it’s worth prioritizing, as I’ve not heard of any from cabal-install users in recent memory.
A Very Clean, Extensible, Nix Project Setup/Skeleton
tl;dr Industrial users of Haskell and Nix should establish a “Nix Working Group”, but the Haskell Foundation (in general) should not concern itself with advocating for/addressing problems with Nix in any official capacity.
I don’t think that the Haskell community should get into the habit of evangelizing Nix, especially in industrial environments.
Nix is extremely complex, arguably more so than Haskell itself; Haskell users are already much more likely to recommend Nix than users of other languages are, and I worry that this gives the impression that Nix is the One True Way to develop and deploy Haskell applications.
A Clear, Easy Path to Modern DevOps-Style Deployments
tl;dr I feel very strongly that the Haskell Foundation should strive to be as unopinionated as possible in advocating for build/deployment/ops solutions so that we can maximize Haskell’s potential for industrial adoption.
Much like with Nix, I think that while it might be useful to establish a “DevOps Working Group”, this isn’t something the Haskell Foundation should directly concern itself with beyond making sure that Haskell tooling conforms to whatever the current industrial “best practices” are for artifact generation.
Realistically, there are orders of magnitude more resources available for DevOps best practices than for Haskell these days; we shouldn’t expend effort in an area where any competent company can be expected to educate itself.
I think, as I’ve probably expressed in some of my other comments, the Haskell community does itself a disservice by tying itself so strongly to Nix. I get the impression that many community members feel that the “ideal” solution is something involving nix-copy-closure and a bunch of NixOS machines. While this may be nice for an organization that’s bought into Nix/NixOS, it’s extremely non-standard in the realm of industrial application deployment.
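For contrast, the de facto standard artifact in much of industry today is a container image, and Haskell fits that model with no Nix involvement at all. A minimal sketch of a multi-stage build (the base images, the `myapp` executable name, and the paths are placeholders, not recommendations):

```dockerfile
# Stage 1: build the executable with a stock GHC/cabal image.
FROM haskell:9.6 AS build
WORKDIR /src
COPY . .
RUN cabal update && cabal build exe:myapp \
 && cp "$(cabal list-bin exe:myapp)" /usr/local/bin/myapp

# Stage 2: ship only the binary in a slim runtime image.
FROM debian:bookworm-slim
COPY --from=build /usr/local/bin/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```

An image like this plugs into any standard registry/orchestrator pipeline, which is exactly the kind of “conforming to industrial best practices” I mean above.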