Priorities for upcoming GHC releases

I really meant from, say, GHC 9.8 to 9.10 rather than first number version bumps.

EDIT: To elaborate, I mean ‘GHC as a tool’, as opposed to ghc as a library. When I read, say, the release notes (2.1. Version 9.10.1 — Glasgow Haskell Compiler 9.10.1 User's Guide), the ‘breaking changes’ from GHC 9.8, as a tool, do not jump out at me.

I really meant from, say, GHC 9.8 to 9.10 rather than first number version bumps.

The policy as I understand it is this (see the sketch after the list):

  • Each six-monthly release of GHC uses a 0.2 version bump, thus from 9.8 to 9.10, or 9.10 to 9.12.
  • “Patch-level releases” change only the minor version. Thus 9.8.1, 9.8.2, 9.8.3, etc. These patch-level releases fix bugs but are intended to change absolutely nothing else, not features, not APIs, nothing.
  • The odd-level major versions, 9.7.X, 9.9.X, etc., are used for development builds, never for releases.
  • We do not have a formal policy for moving from 8.X to 9.X, or from 9.X to 10.X.
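
For illustration, here is a tiny Haskell sketch of that numbering convention (the names Series, classify and isPatchLevelBump are my own hypothetical helpers, not anything GHC ships):

    -- A version is written major.second.patch, e.g. 9.10.2.
    data Series = ReleaseSeries | DevelopmentOnly deriving Show

    -- Even second component (9.8, 9.10, 9.12, ...): a released series.
    -- Odd second component (9.7, 9.9, ...): development builds, never released.
    classify :: (Int, Int) -> Series
    classify (_, second)
      | even second = ReleaseSeries
      | otherwise   = DevelopmentOnly

    -- A patch-level release (9.8.1 -> 9.8.2, ...) bumps only the last component
    -- and is intended to fix bugs without changing features or APIs.
    isPatchLevelBump :: (Int, Int, Int) -> (Int, Int, Int) -> Bool
    isPatchLevelBump (a, b, p) (a', b', p') =
      a == a' && b == b' && p' == p + 1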

Would it be helpful to write down these four points? For example here: releases · Wiki · Glasgow Haskell Compiler / GHC · GitLab

4 Likes

Thanks! What is on my mind is this:

‘Industry’ has to weigh the costs of change against the benefits. However, if a change actually breaks nothing (or little) and only adds, the cost of change might be low. I am wondering if we (as a community) could do more to communicate what is actually broken by a change, making it easier for industry to identify and assess that cost and, perhaps, lowering the perceived barrier to moving forward.

(EDIT1: Taking this opportunity to acknowledge @tomjaguarpaw’s Upgrading from GHC 8.10 to GHC 9.6: an experience report)

(EDIT2: I may have underestimated what the GHC project already does, given the migration guides for each release from GHC 7.8 to GHC 9.12.)

2 Likes

That suggestion sounds reasonable, but it’s surprisingly hard to execute.

  • New features are (increasingly) not a source of breakage; even when they are, you often get a deprecation cycle or two. See GHC Steering committee stability principles

  • Fixing bugs, on the other hand, really can cause breakage – and does! It turns out that people sometimes unwittingly rely on bugs. It’s really really hard to predict this.

  • Library changes are a major source of breakage. This is silly; just because you change from GHC X to GHC X+1 doesn’t mean you should have to change libraries. But currently you do – see GHC stability state of play

It’s hard to know how to communicate all this better, but if there are simple things we can do that would help our users, we’re all ears. Perhaps you can float some ideas? (Remembering the opportunity cost: doing X means not doing Y.)

8 Likes

We could have:

  1. latest stable branch (mind you, this one can be a little slower too if we have nightlies… there’s less pressure to get releases out “just because”)
  2. previous stable branch
  3. LTS branch

Then we always have two candidates for LTS.

But that might mean choosing a different release cycle (more than half a year for a new major version); otherwise the versions “run” too fast. I believe that’s already the case, but it will be worse when there are only two short-term supported branches.


The way I see it is that the current “stable” releases are (ab)used to get experimental features out. This can all be delivered through well-working nightly support instead.

I find it hard to foresee the workload this would involve for GHC devs, though. But it makes sense for end users imo.

So instead of nightly builds, what about monthly or “fortnightly” (i.e. twice a month) releases? If that can be made to work… then weekly builds could be possible. But the impression I’m getting is that 28-31 builds per month is too large an opportunity cost, amongst other things.

Could you outline what you mean by “latest/previous stable branch”? What goes on those branches?

We are about to release 9.12. Perhaps you mean something like the 9.10 (latest) and 9.8 (previous) series? When 9.12 is released, “nightly” will follow 9.13 (which will become 9.14), “latest” will follow 9.12, and “previous” will follow 9.10. Correct?

This means we can afford one additional LTS branch. Even that is a stretch, because the cost of backporting fixes rises as the LTS branch diverges further from master, so maintaining a 2.5-year-old LTS release is more costly than additionally maintaining a 1-year-old 9.6.

Moreover, how do you pick the new LTS release? That is, after we have used GHC 8.10 for 3 years (1 year of regular support + 2 years of LTS), what is the next LTS version? Should it be 9.8 or 9.10? What if 9.8 is (sadly) known to be bug-ridden, while 9.10 is better but too fresh to judge? What is the policy here? Keep maintaining 8.10 for another half year (but the maintenance cost!) or embrace 9.10 instead? I’m uncertain how industry users would respond to immediately having to migrate from 8.10 to 9.10 because the former lost its LTS status in favour of the latter. It appears that we need to notify users at least half a year before we make the switch from 8.10 to 9.10, so that they may migrate from one supported version to another. So, in practice, a 3-year LTS means that LTS releases are 2.5 years apart in order to have overlapping support windows.
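
To make that last bit of arithmetic concrete, here is a rough sketch in Haskell (the 3-year support window and half-year notice period are just the assumptions above, not any agreed policy):

    -- Back-of-the-envelope numbers, in years.
    ltsSupport, migrationNotice :: Double
    ltsSupport      = 3.0   -- assumed LTS maintenance window
    migrationNotice = 0.5   -- assumed time users need to move between LTS releases

    -- For two consecutive LTS support windows to overlap by the notice period,
    -- the next LTS must come out at most this long after the previous one.
    maxGapBetweenLtsReleases :: Double
    maxGapBetweenLtsReleases = ltsSupport - migrationNotice   -- = 2.5 years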

3 Likes

My preference is for our limited capacity to be spent working towards making it easier to upgrade GHC (better stability guarantees, tooling for upgrades, etc) rather than maintaining old versions.

Maintaining an LTS allows you to delay upgrading your GHC. But you will still eventually have to do it.

Upgrading to a new version of GHC can be broken down into the cost of upgrading your own code, and the cost of upgrading your dependencies. The former is a fixed cost; the latter is more variable. When a new GHC is released many libraries will need version bumps, etc., to be compatible with the new version. An early adopter would have to do all this work themselves. But as time goes on the community will do more of this work. In my experience, most patches get made and released relatively quickly, but there’s often a long tail of packages that take longer to be made compatible.

Waiting a bit longer before trying to upgrade is therefore helpful, because you hope that other people will have done the work to upgrade libraries in the ecosystem. But I think you eventually get diminishing returns, and if everyone waits, then no one writes the patches(!). I think we often see this dynamic. For instance, when Stackage nightly gets bumped there is a flurry of patches, but it would be great if people started this work sooner.

I’m quite excited for more core libraries becoming reinstallable (template-haskell, base) with coming releases of GHC. I think this will make upgrading orders of magnitude easier. With that and other stability improvements we’ve already had, I’m really hoping that more people will upgrade sooner, and we can all reap the rewards from that.

10 Likes

This sounds reasonable. Currently:

Current branch, prior branch, prior prior branch.

Proposed:

Current branch, prior branch, some earlier branch.

And periodically, each time the “current branch” shifts upwards by one and becomes the prior branch, we evaluate whether the now prior prior branch should become the new LTS branch, or just fall out of maintenance while we preserve the existing LTS branch.
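
If it helps to pin down that rule, here is a rough sketch of the promotion step (the Branches record and onNewRelease are hypothetical, purely illustrative):

    -- The three maintained branches at any point in time.
    data Branches = Branches
      { current :: String   -- e.g. "9.12"
      , prior   :: String   -- e.g. "9.10"
      , lts     :: String   -- e.g. "9.6"
      }

    -- When a new series is released, the old current becomes prior; we then
    -- decide whether the old prior is promoted to LTS or the existing LTS stays.
    onNewRelease :: String -> Bool -> Branches -> Branches
    onNewRelease newCurrent promoteOldPrior b =
      Branches { current = newCurrent
               , prior   = current b
               , lts     = if promoteOldPrior then prior b else lts b
               }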

So, maintenance-wise, the only difference from now is that if the LTS branch lasts longer, patches will typically become harder to backport. But that seems unavoidable for longer-lived branches, no matter how structured. And who knows how much older that branch will get – maybe only a year more, maybe two.

The one other thing I’d note is that earlier it was proposed that LTS branches also get performance improvements. I think this is probably off the table – performance work in my experience is often pervasive, involving many small changes, and only makes a noticeable difference in combination with other performance work, which is the hardest stuff to backport (as opposed to bug fixes, which ideally are localized and individually impactful). So I would think that typically only bug fixes, not performance work, should be backported.

5 Likes

I hope not, because it is more work for me and for every single distributor.

If we had LTS releases, I could focus more complex and time-intensive bindist work on those, e.g. Build fresh versions of old bindists · Issue #903 · haskell/ghcup-hs · GitHub
The same goes for tier 2 platform support. It’s going to suffer (it already is). And it seems it’s becoming a habit to increasingly outsource certain work onto distributors (e.g. me). I’m not pleased.

It’s similar for HLS: they can focus on supporting fewer versions, as long as the latest LTS is supported.

New releases are not free.

4 Likes

Upgrading to a new version of GHC can be broken down into the cost of upgrading your own code, and upgrading your dependencies.

If I may, this is a tad simplistic: you forget the opportunity cost. Few organisations have people dedicated to taking care of GHC, and so for most of us, upgrading GHC comes at the cost of not doing anything else during that time.

For reference, at work we have made the main monolith compile with GHC 9.6.6 last week. Unless there are critical bugs that will never get fixed for the release family we use, we have little interest in migrating every time a new GHC version comes out, because it simply does not bring anything to the business.

7 Likes

My understanding is that the core problem we are facing is that it takes a long time for a GHC release to become “stable” enough for it to be recommended. In some cases, this means it stops being supported by the time it has become “stable”.

A solution we’re discussing is to increase the support window for certain releases by designating them as LTS releases.

But I think we are taking it as given that it has to take a long time for a release to become “stable”. What I’m suggesting is that it might be better to focus on reducing the time it takes for a release to become “stable”, and on reducing the cost of upgrading. I just wanted to raise this alternative angle.

I’m sorry if I came across as dismissive of your efforts! I am very appreciative of your work on GHCup. Your work is often invisible but crucial.

3 Likes

I agree, this has been my experience too. Upgrading GHC is prohibitively difficult, so it’s delayed until necessary.

My point is that the LTS solution doesn’t help with this all that much. You can stay on a version for a few years longer, but then the fixed costs of making the upgrade will still be there. And now there will probably be quite a big jump to the next LTS release.

My hope is that we can decrease the difficulty of these upgrades so people can do them sooner.

Currently there is a vicious cycle. Upgrades are expensive to do, so industrial users hold back. This means new versions of GHC have fewer users, so fewer bugs get caught. Fewer users mean that maintainers get fewer compatibility patches and this makes it even harder for other users to upgrade.
By reducing the cost of upgrading, I hope we can turn this into a virtuous cycle.

5 Likes

Given the situation with the GHC 9.8.* series, it’s abundantly clear that a 6-month release cadence is unsustainable even for the GHC team itself. It’s even less sustainable for unpaid open-source maintainers. Please, please, please, could we stop wasting community resources on busywork and release GHC only once a year?

8 Likes

Ultimately we have to ask GHCHQ directly. @simonpj, @mpickering and @bgamari: What can the public do to submit a petition to have less frequent releases?

1 Like

Hm, with yearly releases, how would new features & extensions be managed? Right now it seems the “big things” (e.g. the march towards DH) come out with each release on a 6-month cadence. Would it then be a 1-year cadence, and they’d sit around all that time?

One advantage of the current approach is that it does strike some balance between stability and innovation. Maybe it’s not the right balance, but the stability voices tend to be from industry, which will always try to have an outsized influence on GHC thanks to its access to money… so it’s important imo to be wary of that.

4 Likes

Not every industry player asks for the same thing. There are the companies that need stability (and thus predictability), and the companies that actively pay for developing new features in GHC.

2 Likes

Source-only downloads are one option - “early adopters” can build those for themselves, which would help to detect more bugs earlier, both in the tools themselves and in their build systems.

1 Like

At the risk of adding more noise into the discussion: I think it’s clear that some communication of concerns will go a long way.

In particular, GHC did not always have the release cycle it has now (I’m a big fan of the Chesterton’s Fence parable):

  • What were the motivations/concerns for the current approach?
  • Were those concerns addressed?
  • What aspects of those concerns would be made more dire by switching to a longer release cycle?

As we all know, the amount of work necessary tends to fill the time allotted, so there’s some benefit to setting a faster pace. At the same time, maybe things have changed enough that it’s time to revisit.

For folks wanting more consistent nightlies: what does the current situation prevent you from accomplishing? Of course having more intermediate states to test is useful, but why stop at nightly? Why not 3 times a day, or hourly! More seriously, on the spectrum from “every commit” to “every major release”, why is nightly the “right” frequency for GHC?

My understanding is that the Haskell community simply does not have the available resources to consistently produce nightly releases. If resources are to be redirected to accomplish this, it would need to be because there is a very deep/costly concern.

My proposal: while discussion here is good and worthwhile, I think it’s time for us to start thinking about writing down these concerns explicitly. There’s a lot of “ideally we would do X”, which is a good way of setting targets to aim for, but not a good way to determine what actions to perform tomorrow.

We just launched blog.haskell.org for exactly this sort of ecosystem-wide communication from core teams. I’ll happily work with folks to write this stuff down without getting bogged down by the sometimes-heated discussion. It may be that they are written down somewhere already. If they’re still up to date, great! If they aren’t up to date, let’s update them.

5 Likes

One nice thing about the rolling releases is that they have some semblance of being “official” versions. So people can release a library to Hackage against a newer GHC if they are into experimenting. It feels like you kinda lose that with nightlies or source-only releases, but maybe I’m wrong there.

1 Like