GHC 9.10.1-alpha3 is now available

This is in line with what I tell people when new major GHC releases are published: “the .1 release is basically an RC that serves to catch bugs from downstream codebases, you’re better off with the .2 release”.


As a somewhat-neutral observer, maybe I can head off a miscommunication that seems to be brewing here.

The GHC team feel they are under pressure from the community to make releases more quickly. Now they are getting pressure to make releases more slowly. If we go back to GHC Medium-Term Priorities from a year ago, “we received feedback that our release cadence is too fast, and other feedback that it is too slow.”

Maybe those two types of feedback are talking about different aspects of a release? Maybe “faster” means “more versions released per year” and “slower” means “a longer period of time between RC1 and the final release”. But then again, maybe different quarters of the ecosystem simply want different things. Without more data it will be hard for the GHC team to make adjustments.


Certainly, it’s important to realise that the crux of the issue is the lack of compartmentalisation between improvements that could be shipped quickly as minor releases, and necessary breaking changes to Template Haskell and syntax, which ought to be spaced further apart so that third-party maintainers don’t have to work around the clock to keep their projects and libraries up to date.

Faster, non-breaking, minor releases mean that the tooling and methodology for making releases improves and the process becomes less painful. That’s a net win for the release engineering team.

(keyword is “tick-tock releases”)


I have no idea where this comes from, and my experience as GHCup maintainer indicates the opposite is true: people long for fewer releases, of higher quality and with less breakage.

I wonder where this difference in perception comes from.

I have no idea where this comes from, and my experience as GHCup maintainer indicates the opposite is true: people long for fewer releases, of higher quality and with less breakage.

My guess would be that people mostly long for high quality and less breakage, and the release cadence primarily affects how salient those things are.

My impression from the rest of the industry is that the usual belief is that you can’t get better quality and less breakage by releasing less often. If you have a process that allows you to produce low quality releases with lots of breakage, then slow releases will just accumulate large amounts of quality issues and breakage. Whereas if you have a good process, then you can release as often as you like. And pushing for a faster cadence puts pressure on the process, hopefully leading to improvement. As the saying goes “if it hurts, do it more often”.

(Plus, faster releases mean smaller batches, shorter queues, shorter cycle times, faster feedback, etc. etc. Lots of good stuff if you can get it.)

Which is to say, I’m not sure that slower releases would actually help. I think the only thing that will help is continuing to work on making the GHC development and release process produce releases with fewer bugs and less breakage.


This duality can be achieved with nightlies (see Rust), while maintaining a very slow cadence on the stable channel. It may be more work for the compiler team, but nothing comes for free. It is about priorities in the end.


This study doesn’t seem to support that rapid releases cause higher quality either:

If you find a more recent study, that would be interesting.

In my own experience, the major benefits of rapid releases are for the project and less so for the end users, because you’re increasingly using end users for QA, albeit in a low-impact way. That still causes churn for everyone involved (including the open-source supply chain), but that churn isn’t work the project maintainers do themselves, and as such is easy to neglect.

With nightlies (and prereleases), this becomes opt-in and is more transparent, imo.


I believe this seeming paradox can be resolved by the hypothesis that what both groups want is less busywork per unit time.

I suspect that the group that wants GHC to release faster thinks they will have less busywork if they can address breaking changes in smaller increments, and that the group that wants GHC to release slower thinks they will have less busywork if they can address more breaking changes in one go, to avoid frequent context switches.

The simplest way to please both groups is, I think, to cause less busywork by making fewer breaking changes.


This is an argument to increase throughput, while my argument above was to increase latency.


The GHC team are also downstream of certain stakeholders: namely, the people and organizations that fund or directly contribute new features. I’m sure they don’t like waiting 6 months to get the thing they already worked for.

Indeed. The efforts to decouple GHC’s unstable internals from the Template Haskell API, GHC’s own API, and the base library seem like important work in that direction.


I’m afraid this seems like a non-argument.

This is what nightlies and prereleases can deliver. They allow features to be merged early, and allow ghcup to distribute bindists for use by early adopters without compromising the experience of the larger Haskell community.

GHC development being affected by the fear of losing clients/funding seems like a precarious situation (if that is the case). I had hoped that the HF, as a middleman, would be able to alleviate such concerns.


I wouldn’t read too much into it. I’m just saying that, as a class, people who implement new features do exist. And if I was one of them, I’d definitely want to see those features in a stable release, so that I could actually use them in production.


Again: it boils down to priorities. Do you prioritize new features/contribution experience over, say, average end user experience?

I am interested to know what GHC HQ’s priorities are. Maybe we’re just talking past each other, maybe not.

And here’s another attempt of pleasing both sides without compromising early merging of new features and average end user experience: Add -experimental flag by angerman · Pull Request #617 · ghc-proposals/ghc-proposals · GitHub


This is an interesting conversation but I’m not sure what to take away from it.

Let us just remind ourselves: we all want the same thing, a Haskell ecosystem in which both upstream and downstream contributors feel valued and listened to, and in which friction is minimised. We may have different opinions about how to address that goal, especially with limited resources, but the more we talk to each other the more likely we are to build a healthy ecosystem.

With that in mind, here’s my summary:

  • Everyone wants less breakage. More precisely, (BG1): If a program compiles and works with GHC(X) then it should compile and work with GHC(X+1). This is the goal I express in this GHC stability state of play page. It is surprisingly difficult to achieve, for the reasons laid out there, but progress is being made on a number of fronts.

  • Library updates. Where we fail to achieve (BG1) we are stuck with the library-update problem. If you depend indirectly on package P, and package P needs some modification to compile with GHC(X+1), then you have to wait for all the package authors between you and P to update their packages before you can compile with GHC(X+1). That is why we have head.hackage for testing GHC; and it was the motivation behind the GHC.X.Hackage proposal. I understand the reasons for disliking GHC.X.Hackage; but I don’t know any other way to address the library-update problem, except by achieving (BG1).

    I’m not sure what actions, if any, to take here.

  • Release schedule

    • Within a particular release, if we are forced to delay the schedule we should delay the entire remaining schedule, rather than compressing it to maintain artificial deadlines. E.g. if alpha-3 is delayed, we should delay RC1 too etc.

    • Perhaps we should rename alpha-3 as RC1, since many downstream users may not start work until they have a “release candidate”, for understandable reasons.

    • I’m not getting a strong signal in favour of more frequent releases, or less-frequent releases; just in favour of high-quality releases.
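To make the library-update bullet concrete: testing a codebase against a GHC prerelease with head.hackage amounts to adding its package repository to a cabal.project. A rough sketch follows; the exact url, root-keys, and key-threshold values should be copied from the head.hackage page itself rather than from here:

```cabal
-- cabal.project (sketch)
packages: .

-- Add the head.hackage overlay, which carries patched releases of
-- packages that don't yet build with GHC HEAD / prereleases.
repository head.hackage.ghc.haskell.org
  url: https://ghc.gitlab.haskell.org/head.hackage/
  secure: True
  -- root-keys and key-threshold deliberately omitted:
  -- copy the current values from the head.hackage page.

-- Prefer patched versions from the overlay when they exist.
active-repositories: hackage.haskell.org, head.hackage.ghc.haskell.org:override
```

The `:override` suffix is what lets the overlay’s patched packages shadow the regular Hackage versions during solving.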

Does anyone else have actionable take-aways to suggest?


Does anyone else have actionable take-aways to suggest?

Sure: nightly releases were quite promising for enabling library maintainers (like me) to stay on top of the game and catch breaking changes and bugs in real-world codebases earlier. CI setups would greatly benefit from them, as would people who wish to use experimental features that haven’t made it into a stable branch yet. The moment GHC HQ can vouch for nightly releases being produced and cared for, this will greatly help with trust in the periodic releases.


I’m in favor of more minor releases and fewer major releases, so that there are fewer GHC branches to maintain and the ones that remain have a longer shelf life, allowing for more backports (not just of bugfixes, but also performance improvements and other things).

At the moment, I can barely update my own code bases to 9.6.x (lots of small problems)… and GHC HQ is already in the process of pushing out 9.10. I don’t see how I can keep this up, honestly. GHCup still recommends 9.4.8, but it’s effectively already abandoned by GHC HQ: GHC Status · Wiki · Glasgow Haskell Compiler / GHC · GitLab

Other alternatives were discussed previously, but I don’t remember all the arguments against it… I don’t have a lot of visibility on the pain points and cost centers of GHC maintenance:

  • GHC LTS releases (long-running branches)
  • Rust model: very conservative stable releases, fast-iterating nightlies (this might effectively be like maintaining two compilers)
  • language editions (so that I can opt out of all breaking changes, even with newer compilers)

So I won’t put forward an opinion on whether any of those are feasible.
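On the language-editions bullet: GHC already has a limited form of this in the GHC2021 (and newer GHC2024) editions, which fix a set of extension defaults that a package can pin independently of whichever compiler builds it. That doesn’t shield you from library breakage, but it does freeze the language defaults. A hypothetical .cabal stanza (module name is a placeholder):

```cabal
library
  exposed-modules:  MyLib
  build-depends:    base
  -- Pin the edition so a newer GHC's changed language defaults
  -- don't silently alter how this package is compiled.
  default-language: GHC2021
```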


GHC nightlies were unfortunately a failure and their advertisement has been removed from GHCup (@chreekat agreed to that at the time). That was one of the things that motivated the midstream bindist proposal:

As it stands, nightlies require robust architecture and persistent storage to be usable and that may be an unnecessary time sink for GHC HQ. However, I don’t know if I will have time for that this year either.


If you have the energy, it would be great if you could write down what problems you ran into.


I forked off a reply GHCup changes required for GHC 9.6
