GHC 9.10.1-alpha3 is now available

@facundominguez or @RyanGlScott, do you know why there are so many patches in head.hackage? If it’s only supposed to be for enabling building HEAD, it seems odd that there are patches going back to 2019 in there.


I think trying to upstream patches more from head.hackage before a release and keeping track of the status of patches both sound like good ideas. Though I’m somewhat sceptical that it would be worth blocking a release over this stuff.

At the end of the day, stuff like this boils down to capacity. Redundant patches get removed when someone realises and makes an MR. head.hackage could always benefit from more people writing patches, from more people using them, and from more people upstreaming and tracking the status of upstreamed patches.

Though it doesn’t always make sense to expect upstream patches to be released before a GHC release:


A couple of questions:

  1. In what way do you anticipate head.hackage proliferating outside of GHC development?
  2. What are the changes to GHC that are causing these packages to need to be patched?

Several packages with head.hackage patches are no longer maintained (e.g., FPretty and critbit). As such, their head.hackage patches have never been removed, as there haven’t been new Hackage releases that would render the patches obsolete.

Arguably, we should remove these patches after a certain amount of time has passed, although I haven’t found the time to do so. Contributions would be welcome here.


So what makes a patch/package go into head.hackage? Are there strict criteria? What is the main use case precisely?

6 posts were split to a new topic: head.hackage usage

I’ve built unofficial GHC JS cross bindists for this release (provisional platform/distro support):

And they’ve been added to the ghcup-cross-0.0.8.yaml channel: Add javascript-unknown-ghcjs- by hasufell · Pull Request #203 · haskell/ghcup-metadata · GitHub

To try, you first need emscripten:

git clone
cd emsdk
./emsdk install latest
./emsdk activate latest
source ./

Then install the GHC cross via ghcup:

ghcup config add-release-channel
emconfigure ghcup install ghc --set javascript-unknown-ghcjs-

Then do some hello-world:

echo 'main = putStrLn "hello world"' > hello.hs
javascript-unknown-ghcjs-ghc -fforce-recomp hello.hs

Also see


The main use-case of head.hackage as far as I’m concerned is to facilitate testing of GHC. For better or worse, source changes sometimes are needed to build projects with GHC HEAD. head.hackage exists to collect these patches in one place so that we (and possibly our users) can readily test their projects against the compiler.
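For anyone wanting to try this: the overlay is consumed by adding its package repository to a cabal.project file. A rough sketch of the stanza (the URL is the overlay’s published location as I understand it; the root keys are deliberately elided here, so copy the full stanza from the head.hackage instructions rather than from this sketch):

```cabal
-- cabal.project (sketch)
repository head.hackage.ghc.haskell.org
  url: https://ghc.gitlab.haskell.org/head.hackage/
  secure: True
  -- root-keys and key-threshold omitted here on purpose:
  -- take the current values from the head.hackage instructions
```

After adding the stanza, a `cabal update` should make the patched releases available alongside regular Hackage.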

I proposed to extend the mandate of head.hackage to address end-user migration explicitly in the ghc.X.hackage proposal, although there was little appetite in the community for this. Ultimately I think this is fine; we all agree that it would be better to minimize the need for head.hackage in the first place.

I agree with @hasufell that we cannot create a dependency between the GHC release process and the updating of the ecosystem. Managing a GHC release is already very tricky, with far too many “known unknowns” which make adhering to a concrete schedule a challenge. Adding yet more such constraints would make the process significantly more costly than it already is.


The primary motivation for the short gap between release candidate and final is to maintain the original final release date despite having pushed alpha2 back by a week. In principle, the alpha series exists specifically to allow for a wide range of early testing, giving us plenty of time to address issues before the final release. However, I recognize that we often don’t see much vigorous testing until later in the series. Consequently, perhaps it would be preferable to err on the side of giving more time to the late alphas and release candidates.

All of this is to say, I would be fine with moving the final release back by a week to allow more time for testing.


As with alpha2, if you want to use it with Stack:

  1. Upgrade to the master branch version of Stack (only if you are not using GHCup to manage versions of Stack): stack upgrade --source-only --git.

2A. If you are not using GHCup to manage versions of GHC, augment Stack’s default setup-info dictionary in a configuration file (needed only until Stack has fetched the compiler once). For example, on Windows:

        # Can be extended with SHA protections etc: see

2B. If you are using GHCup to manage versions of GHC, augment ~/.ghcup/config.yaml. For example:

  - StackSetupURL
  - setup-info:

            # Can be extended with SHA protections etc: see
  3. Specify the compiler in a Stack configuration file (e.g. stack.yaml):
compiler: ghc-

Fascinatingly, GHCup also allows you to mix the stack metadata with any GHCup metadata channel.

E.g. if you want the stack logic for bindists + ghcup logic for prereleases, you can do:

  - StackSetupURL 

No need to write your own setup-info dictionary.
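For concreteness, here is a sketch of what that mixed configuration might look like in ~/.ghcup/config.yaml (field and channel names assume a reasonably recent ghcup; check the documentation for your version):

```yaml
# ~/.ghcup/config.yaml (sketch) -- sources are consulted in order
url-source:
  - GHCupURL        # regular ghcup metadata (release/prerelease logic)
  - StackSetupURL   # stack's setup-info logic for locating bindists
  # any extra channel added via `ghcup config add-release-channel`
  # ends up listed here as well
```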


I’m afraid you are tired of my usual rant, but the community at large does not care whether a release is a week or even a month late. There are significant costs to pay for each minor GHC release: ghcup has to support it, parsers and exact printers have to be updated, HLS has to support it and make a new release, then Docker images, Stackage, Stack, Haskell-CI, haskell-actions/setup, etc. There is no point in incurring all these costs for the sake of an artificial deadline.

Since alpha releases are not guaranteed to be feature-complete and more breakage can be introduced before the release, maintainers should not relax build-depends bounds or make new releases of their packages until an RC is out. It is expected that almost no testing outside of root packages happens until RC1.

Maybe you want to adjust the nomenclature of GHC releases and mark, say, alpha3 as RC1? This assumes you can guarantee that no further breakage happens afterwards.

A gap of two weeks between RC1 and 9.10.1 means that maintainers have essentially one weekend to test, adjust and release everything. I’d rather ship what would be 9.10.1 as RC2, and then make the actual 9.10.1 around the time you’d normally do 9.10.2.


This is in line with what I tell people when new major GHC releases are published: “the .1 release is basically an RC that serves to catch bugs from downstream codebases, you’re better off with the .2 release”.


As a somewhat-neutral observer, maybe I can head off a miscommunication that seems to be brewing here.

The GHC team feel they are under pressure from the community to make releases more quickly. Now they are getting pressure to make releases more slowly. If we go back to GHC Medium-Term Priorities from a year ago, “we received feedback that our release cadence is too fast, and other feedback that it is too slow.”

Maybe those two types of feedback are talking about different aspects of a release? Maybe “faster” means “more versions released per year” and “slower” means “a longer period of time between RC1 and the final release”. But then again, maybe different quarters of the ecosystem simply want different things. Without more data it will be hard for the GHC team to make adjustments.


Certainly, it’s important to realise that the crux of the issue is the lack of compartmentalisation between improvements, which could be shipped faster as minor releases, and the necessary breaking changes to TH and syntax, which ought to be spaced further apart so that third-party maintainers don’t have to work around the clock to keep their projects and libraries up to date.

Faster, non-breaking minor releases mean that the tooling and methodology for making releases improve and the process becomes less painful. That’s a net win for the release engineering team.

(keyword is “tick-tock releases”)


I have no idea where this comes from, and my experience as GHCup maintainer indicates the opposite is true: people long for fewer releases, of higher quality and with less breakage.

I wonder where this difference in perception comes from.

I have no idea where this comes from, and my experience as GHCup maintainer indicates the opposite is true: people long for fewer releases, of higher quality and with less breakage.

My guess would be that people mostly long for high quality and less breakage, and the release cadence primarily affects how salient those things are.

My impression from the rest of the industry is that, conventionally, you can’t get better quality and less breakage by releasing less often. If you have a process that allows you to produce low-quality releases with lots of breakage, then slow releases will just accumulate large amounts of quality issues and breakage. Whereas if you have a good process, then you can release as often as you like. And pushing for a faster cadence puts pressure on the process, hopefully leading to improvement. As the saying goes, “if it hurts, do it more often”.

(Plus, faster releases mean smaller batches, shorter queues, shorter cycle times, faster feedback, etc. etc. Lots of good stuff if you can get it.)
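To make the queueing point slightly more concrete (this is standard queueing theory, nothing GHC-specific): Little’s Law says that for a stable system, the average work-in-progress L, arrival rate λ, and average cycle time W satisfy

    L = λ · W,  i.e.  W = L / λ

so at a fixed rate λ of merged changes, shipping in smaller batches (less unreleased work L sitting in the queue) directly shortens the average time W each change waits before reaching users.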

Which is to say, I’m not sure that slower releases would actually help. I think the only thing that will help is continuing to work on making the GHC development and release process produce releases with fewer bugs and less breakage.


This duality can be achieved with nightlies (see Rust), while maintaining a very slow cadence on the stable channel. It may be more work for the compiler team, but nothing comes for free. It is about priorities in the end.


This study doesn’t seem to support the claim that rapid releases cause higher quality either:

If you find a more recent study, that would be interesting.

In my own experience, the major benefits of rapid releases accrue to the project and less so to the end users, because you’re increasingly utilizing end users for QA purposes, albeit in a low-impact way. That still causes churn for everyone involved (including the open source supply chain), but it’s not work the project maintainers do, and as such it is easy to neglect.

With nightlies (and prereleases), this becomes opt-in and is more transparent, imo.


I believe this seeming paradox can be resolved by the hypothesis that what both groups want is less busywork per unit time.

I suspect that the group that wants GHC to release faster thinks they will have less busywork if they can address breaking changes in smaller increments, and that the group that wants GHC to release slower thinks they will have less busywork if they can address more breaking changes in one go, to avoid frequent context switches.

The simplest way to please both groups is, I think, to cause less busywork by making fewer breaking changes.
