Maintaining Haskell programs

I think code would break much less often if people really just used Haskell2010. A large part of the problem is that so much of the ecosystem uses various extensions which are at different levels of stability, or worse, depends on the internals of GHC itself.

Can anyone name one breaking change that breaks Haskell2010 code? I can only think of base library changes, but you should be able to get a Haskell2010 base library which matches the specification exactly.

1 Like

Everything would be simpler if people didn’t use extensions, but I don’t think it would be the same GHC! Part of GHC’s appeal (for some of its audience) has always been its willingness to experiment, and extensions are a plausible mechanism to do so. Base library changes are a great example of why interop between versions is so fraught.

What I mean to say is that I think you can only have two of these three points:

  • bleeding edge experimental features
  • a guarantee that all your programs will still compile in X years time
  • a simple compiler that can be maintained by a low-budget team

Of course all three points have varying gradations, so it might be possible to strike a better balance than we do now.

6 Likes

That’s a great summary!

I cannot think of any breakage in my code caused by the use of extensions.

On the other hand, I can think of one (minor) breakage that involved no extensions: the split of Monoid into two classes. But fixing it was a very trivial change in my code.
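
For context, this was the Semigroup–Monoid split (base-4.11, GHC 8.4), where `Semigroup` became a superclass of `Monoid`. A hedged sketch of the mechanical fix (type name invented for illustration):

```haskell
newtype Log = Log [String]

-- Pre-split code often defined only a Monoid instance:
--
--   instance Monoid Log where
--     mempty = Log []
--     Log a `mappend` Log b = Log (a ++ b)
--
-- After the split this fails with "No instance for (Semigroup Log)".
-- The fix is to move the combining operation into Semigroup:
instance Semigroup Log where
  Log a <> Log b = Log (a ++ b)

instance Monoid Log where
  mempty = Log []

main :: IO ()
main = let Log xs = Log ["a"] <> mempty in print xs
```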

1 Like

I am curious to know which extensions have actually broken packages. My experience with broken packages is 99% due to the lower bound on base needing to be bumped, or TemplateHaskell code being broken by some internal change in GHC.

4 Likes

Respectfully disagree - to me it sounds like “maintainability would be much better if we all stick with GHC-8.10.7 (or older - whatever’s the latest GHC-7) and whatever Cabal/Stack that accompanies it”.

Sure, if toolchains did not change, and existing packages’ API didn’t change - maintaining Haskell apps would be a breeze. Unfortunately, that’s not the case. “And if wishes were wings - . . .”

39 years of software development make me think that language extensions are a likely source of maintainability issues. It’s analogous to the maintainability issues for C++ (and other “large” languages) for teams above a certain size. Small teams can decide on, and police, the set of language extensions they use. But this doesn’t work so well in large teams, especially if there is a lot of turnover over time.

2 Likes

What is it about language extensions, exactly, that does this? As a manager or whatever, I guess I see how it can run afoul of the “engineers are cogs who should be able to get run over by buses” philosophy. But from a technical standpoint, I don’t get how a module being able to turn on LambdaCase, ImplicitParams, LinearTypes, DataKinds, TypeFamilies, etc. is gonna be any worse than a project having a messy dependency graph or being untestable (far more common problems, and language-agnostic).

4 Likes

I agree that messy dependency graphs and untestable code are also likely to make Haskell code unmaintainable, but, as you point out, these are language agnostic. Language extensions, especially if many of them are used inconsistently across a large project, could greatly increase the difficulty of understanding and modifying the code. I suspect that as more and more extensions come into existence, it might even be possible for a large project to become arbitrarily unmaintainable. Lsmor pointed out some specific difficulties.

1 Like

Maybe it would be helpful to give a concrete example of a program that rotted due to using extensions? Because I’ve never seen a concrete example of a GHC extension actually causing maintenance problems.

When I was new to Haskell, extensions were odd. But is there one (even the most onerous) that is actually some scary problem for a professional software developer? They’re well documented, tightly scoped, and in most cases the worst outcome is that the code doesn’t compile (I guess this is where people tell their RecordWildCards shadowing horror stories :laughing:)
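
For anyone who hasn’t hit it, the RecordWildCards hazard is that `Config{..}` silently binds every field name, shadowing any binding already in scope. A minimal invented sketch:

```haskell
{-# LANGUAGE RecordWildCards #-}

data Config = Config { port :: Int, host :: String }

describe :: Config -> String
describe cfg =
  let host = "outer"
  in case cfg of
       -- Config{..} binds 'port' and 'host' from the record,
       -- silently shadowing the let-bound 'host' above.
       Config{..} -> host ++ ":" ++ show port

main :: IO ()
main = putStrLn (describe (Config 8080 "record"))
```

With `-Wname-shadowing` GHC at least warns about this; without it, the code compiles quietly and uses the record’s `host`.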

And like I said, “more cognitive load” isn’t a maintenance problem. One more new concept/syntax/whatever doesn’t constitute a “maintenance problem.” I can easily flip the arrow and call it an attitude problem.

Haskell does have costs due to its (good imo) willingness to bork backwards compat. Avoid (success at all costs) and whatnot. That is the “maintenance problem” in my eyes. And it’s trivial and mechanical to work through.

2 Likes

Lots of packages were broken for a long time on 9.0 (many even skipped 9.0 altogether) due to the simplified subsumption change, which affected code using higher-rank types (RankNTypes).
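
To illustrate (a hedged sketch, not from any particular broken package): under GHC ≤ 8.10, deep subsumption let a function whose `forall` sits at the front be passed where the expected type nests the `forall` behind an argument; GHC 9.0 rejects that, and the usual fix is a manual eta-expansion:

```haskell
{-# LANGUAGE RankNTypes #-}

f :: forall a. Int -> a -> a
f _ x = x

g :: (Int -> forall a. a -> a) -> Int
g k = k 1 2

-- Accepted by GHC 8.10 (deep subsumption), rejected by GHC 9.0:
--
--   broken :: Int
--   broken = g f
--
-- The mechanical fix is to eta-expand:
fixed :: Int
fixed = g (\i -> f i)

main :: IO ()
main = print fixed
```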

1 Like

Under @Underlap’s assumption (“for teams above a certain size”), this might well be a problem.

Small teams can develop a cohesive communication strategy and establish a set of shared techniques from which to pick while developing. The bigger the team (or the more fractured the team), the greater the risk of ending up in a Tower of Babel, where a developer from team α has trouble understanding the lingo of a developer from team β.

Code is written once and read many times; friction is a cost.

5 Likes

that is actually some scary problem for a professional software developer

One language extension that terrifies “Probie the software engineer” (but not “Probie the hacker”) is ImportQualifiedPost.

1 Like

Why is that?



Ok, full disclosure, I’m relatively new to Haskell. I don’t have an example project that’s unmaintainable because of extensions. Hence my guarded language (“likely”, “could”).

But my concerns are backed up by some practical experience. I wanted to contribute to a Haskell project (nameless, to protect the innocent) and the first module I opened up used:

{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE TypeFamilies #-}

(other modules used other extensions). I hate guesswork, so I set about reading the docs for the unfamiliar extensions.

Flexible contexts didn’t seem too threatening, except that the docs were for GHC 9.0.1 and the latest docs are much terser (why?). The older docs were in a section that pointed at the paper Type classes: exploring the design space, which looked like a challenging read in itself.

Type families looked harder. The docs linked to three academic papers and then extended over several pages. The Haskell wiki page on type families was a little gentler, but still pretty off-putting.
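
For what it’s worth, the day-to-day reading burden of those two is smaller than the docs suggest. A minimal invented sketch of each (not from the project in question):

```haskell
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE TypeFamilies #-}

-- FlexibleContexts merely relaxes what may appear in a constraint:
-- Haskell2010 rejects Show (Maybe a) here because the class argument
-- is not a bare type variable.
showWrapped :: Show (Maybe a) => a -> String
showWrapped = show . Just

-- TypeFamilies allows type-level functions.  Elem maps a container
-- type to its element type.
type family Elem c
type instance Elem [a] = a

firstElem :: [a] -> Elem [a]
firstElem = head

main :: IO ()
main = do
  putStrLn (showWrapped (42 :: Int))
  print (firstElem "abc")
```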

I dare say that if I treated getting up to speed on each of these extensions as a project in its own right, I could eventually become confident enough to work with the module in question.

Perhaps the solution is education and hiring, so that there is a sufficient pool of developers familiar with the most common extensions. But from my perspective as a seasoned programmer, that just feels like another symptom of the potential for unmaintainability.

5 Likes

ImportQualifiedPost creates a decision where neither choice matters, and yet it will consume a person-hour or more of the team’s time every time there’s a discussion about using it, plus more time documenting that decision. Ok, so it only costs a few hundred dollars (which might blow out to one or two thousand depending on team dynamics - I’ve been at places where bikeshedding arguments have gotten out of control), but it doesn’t feel like money well spent.

6 Likes

IME ImportQualifiedPost is fine. You enable it alongside -Wprepositive-qualified-module and you’re back to one choice.
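
Concretely, the extension only changes where the keyword goes, and the warning nudges every import toward the new form, collapsing the style decision back to a single choice (sketch):

```haskell
{-# LANGUAGE ImportQualifiedPost #-}

-- Traditional Haskell2010 form; with -Wprepositive-qualified-module
-- this line would trigger a warning.
import qualified Data.Map as Map

-- Postpositive form the extension enables.
import Data.Map qualified as MapQ

main :: IO ()
main = print (Map.size (MapQ.fromList [(1 :: Int, "a")]))
```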

Something like BlockArguments is far more problematic because there’s not even an hlint check for redundant $ with it, so you have potential for discussion in each code review.
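
For reference, the choice in question (sketch): BlockArguments lets a do-block be passed directly as an argument, so the traditional `$` becomes redundant but is not rejected, which is where the per-review discussion comes from.

```haskell
{-# LANGUAGE BlockArguments #-}

import Control.Monad (when)

main :: IO ()
main = do
  -- Haskell2010 style: the do-block needs $ (or parentheses).
  when True $ do
    putStrLn "with dollar"
  -- BlockArguments style: the do-block is itself an argument.
  when True do
    putStrLn "without dollar"
```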

5 Likes

What is folks’ opinion on BlockArguments? I am using it, but I sense it could be controversial.

1 Like

If you want to avoid an hour’s discussion on such matters I suggest an opinionated code formatter (at work we use Ormolu, and I’m very happy with it).

3 Likes