Priorities for upcoming GHC releases

This sounds reasonable. Currently:

Current branch, prior branch, prior prior branch.

Proposed:

Current branch, prior branch, some earlier branch.

And each time the “current branch” shifts upwards by one and becomes the prior branch, we evaluate whether the now prior-prior branch should become the new LTS branch, or just fall out of maintenance while we preserve the existing LTS branch.
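The rotation described above can be sketched in a few lines of Haskell. This is purely illustrative; the names (`Support`, `advance`) and the promote-or-drop flag are my own assumptions about the proposal, not an actual GHC process:

```haskell
-- Hypothetical model of the proposed branch rotation (illustrative only).
type Series = (Int, Int)  -- e.g. (9,10) for the 9.10 series

data Support = Support
  { current :: Series  -- receives all fixes
  , prior   :: Series  -- still maintained
  , lts     :: Series  -- long-term support branch
  } deriving Show

-- When a new series is released, the old current becomes the prior branch,
-- and we decide whether the old prior replaces the LTS or simply falls out
-- of maintenance (preserving the existing LTS branch).
advance :: Bool -> Series -> Support -> Support
advance promoteOldPrior newCurrent s = Support
  { current = newCurrent
  , prior   = current s
  , lts     = if promoteOldPrior then prior s else lts s
  }

-- Example: 9.12 comes out; 9.8 falls out of maintenance, 9.6 stays LTS,
-- leaving 9.12, 9.10 and 9.6 as the supported set.
example :: Support
example = advance False (9,12) (Support (9,10) (9,8) (9,6))
```

The only policy decision at each step is the single boolean: promote the retiring prior branch to LTS, or keep the existing LTS.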

So, maintenance-wise, the only difference from now is that if the LTS branch lasts longer, patches will typically become harder to backport. But that seems unavoidable for longer-lived branches, no matter how structured. And who knows how much older that branch will get – maybe only a year more, maybe two.

The one other thing I’d note is that earlier it was proposed that LTS branches also get performance improvements. I think this is probably off the table – performance work in my experience is often pervasive, involving many small changes, and only makes a noticeable difference in combination with other performance work – the hardest stuff to backport. (As opposed to bugfixes, which are ideally localized and individually impactful.) I would think that typically such work should not be backported – only bug fixes.

5 Likes

I hope not. Because it is more work for me and for every single distributor.

If we had LTS releases, I could focus more complex and time-intensive bindist work on those, e.g. Build fresh versions of old bindists · Issue #903 · haskell/ghcup-hs · GitHub
The same goes for tier 2 platform support. It’s going to suffer (it already is). And it seems it’s becoming a habit to increasingly outsource certain work onto distributors (e.g. me). I’m not pleased.

It’s similar for HLS. They can focus on supporting fewer versions, as long as the latest LTS is supported.

New releases are not free.

3 Likes

Upgrading to a new version of GHC can be broken down into the cost of upgrading your own code, and upgrading your dependencies.

If I may, this is a tad simplistic: you forget the opportunity cost. Few organisations have people dedicated to taking care of GHC, and so for most of us, upgrading GHC comes with the cost of not doing anything else.

For reference, at work we have made the main monolith compile with GHC 9.6.6 last week. Unless there are critical bugs that will never get fixed for the release family we use, we have little interest in migrating every time a new GHC version comes out, because it simply does not bring anything to the business.

6 Likes

My understanding is that the core problem we are facing is that it takes a long time for a GHC release to become “stable” enough for it to be recommended. In some cases, this means it stops being supported by the time it has become “stable”.

A solution we’re discussing is to solve this by increasing the support window for certain LTS releases.

But I think we are taking this idea that it has to take a long time for a release to become “stable” as given. What I’m suggesting is that it might be better to focus on reducing the time it takes for a release to become “stable”, and reducing the cost of upgrading. I just wanted to raise this alternative angle.

I’m sorry if I came across as dismissive of your efforts! I am very appreciative of your work on GHCup. Your work is often invisible but crucial.

3 Likes

I agree, this has been my experience too. Upgrading GHC is prohibitively difficult, so it’s delayed until necessary.

My point is that the LTS solution doesn’t help with this all that much. You can stay on a version for a few years longer, but then the fixed costs of making the upgrade will still be there. And now there will probably be quite a big jump to the next LTS release.

My hope is that we can decrease the difficulty of these upgrades so people can do them sooner.

Currently there is a vicious cycle. Upgrades are expensive to do, so industrial users hold back. This means new versions of GHC have fewer users, so fewer bugs get caught. Fewer users mean that maintainers get fewer compatibility patches and this makes it even harder for other users to upgrade.
By reducing the cost of upgrading, I hope we can turn this into a virtuous cycle.

5 Likes

Given the situation with the GHC 9.8.* series, it’s abundantly clear that a 6-month release cadence is unsustainable even for the GHC team itself. It’s even less sustainable for unpaid open-source maintainers. Please-please-please, could we stop wasting community resources on busywork and release GHC only once a year?

7 Likes

Ultimately we have to ask GHCHQ directly. @simonpj, @mpickering and @bgamari: What can the public do to submit a petition to have less frequent releases?

1 Like

Hm, with yearly releases, how would new features & extensions be managed? Right now it seems the “big things” (e.g. the march towards DH) come out with each release on a 6-month cadence. Would it then be a 1-year cadence, and they’d sit around all that time?

One advantage of the current approach is that it strikes some balance between stability and innovation. Maybe it’s not the right balance, but the stability voices tend to be from industry, which will always try to have an outsized influence on GHC thanks to its access to money… so it’s important, imo, to be wary of that.

4 Likes

Not every industry player asks for the same thing. There are the companies that need stability (and thus predictability), and the companies that actively pay for developing new features in GHC.

2 Likes

Source-only downloads are one option – “early adopters” can build those for themselves, which would help to detect more bugs earlier, both in the tools themselves and in their build systems.

1 Like

At the risk of adding more noise into the discussion: I think it’s clear that some communication of concerns will go a long way.

In particular, GHC did not always have the release cycle it has now. (I’m a big fan of the Chesterton’s Fence parable):

  • What were the motivations/concerns for the current approach?
  • Were those concerns addressed?
  • What aspects of those concerns would be made more dire by switching to a longer release cycle?

As we all know, the amount of work necessary tends to fill the time allotted, so there’s some benefit to setting a faster pace. At the same time, maybe things have changed enough that it’s time to revisit.

For folks wanting more consistent nightlies, what is the current situation preventing you from accomplishing? Of course having more intermediate states to test is useful, but why stop at nightly? Why not 3 times a day, or hourly! More seriously, on the spectrum of “every commit” to “every major release”, why is nightly the “right” frequency for GHC?

My understanding is that the Haskell community simply does not have the available resources to consistently produce nightly releases. If resources are to be redirected to accomplish this, it would need to be because there is a very deep/costly concern.

My proposal: while discussion here is good and worthwhile, I think it’s time for us to think about writing down these concerns explicitly. There’s a lot of “ideally we would do X”, which is a good way of setting targets to aim for, but not a good way to determine what actions to perform tomorrow.

We just launched blog.haskell.org for exactly this sort of ecosystem-wide communication from core teams. I’ll happily work with folks to write this stuff down without getting bogged down by the sometimes-heated discussion. It may be that they are written down somewhere already. If they’re still up to date, great! If they aren’t up to date, let’s update them.

5 Likes

One nice thing about the rolling releases is that they are some semblance of “official” versions. So people can release to Hackage a library against a newer GHC if they are into experimenting. Feels like you kinda lose that with nightlies/source only but maybe I’m wrong there.

1 Like

It’s worth pointing out that the current release priorities and schedule are the result of a long history, and not simply of the GHC team picking a convenient schedule. My impression is that they were set with the goal of settling on something that works with the resources available while still serving users well.

To make this more obvious it might be helpful to go into some of the history:

The first major event I witnessed in this regard was this blog post by Ben in 2017: Reflections on GHC's release schedule — The Glasgow Haskell Compiler, which first suggested the current goal of a 6-month cadence.

As I remember it, not everyone was in favor of faster releases, but the majority of user feedback about a faster release cycle was positive. Discussions obviously happened in many places, but the ones still easily accessible seem to confirm this, with the mailing list and Reddit being overwhelmingly positive about these plans at the time.

A few years later, the topic of lengthening the cycle to yearly was brought up on the GHC issue tracker. In my opinion that ticket does a stellar job of outlining some benefits of a longer schedule and surveying other projects. But the comments were mixed, both for and against longer cycles, and the ticket was ultimately closed by the contributor who opened it without much happening.

Yet another two years later, Ben again set out to gather feedback from users, this time with the idea of a tick-tock release cycle. It got mixed feedback, and with interest fading it was eventually abandoned.

This isn’t a complete history, but even this should make clear that the current schedule wasn’t arrived at without reflection or consideration of the trade-offs involved.


What does that tell us about the reasons for the current cadence?

I would summarize the motivations for the current cadence, as I perceive them, as follows:

  • Releases with a year or more between major releases were seemingly not very popular in the past. (Based on the reactions to the 2017 blog post and change of schedule).
  • Why exactly this was the case isn’t easily determined, but after reading through the things linked above as well as some more it seems to be a mix of:
    • A large delay between feature work and it being available to users.
    • Major updates often causing large amounts of breakage.
    • Users locked into boot library versions often had to wait rather long for bug fixes.
  • For GHC contributors there were additional drawbacks:
    • Large lag between changes in GHC and feedback (for features and bugs both).
    • It caused a “mad rush” period in GHC before a release where contributors frantically tried to get their feature merged before the release window.

I don’t think the faster cadence inherently improved the “breakage per time” people experienced. But I would say it improved all the other aspects of the points mentioned above. However, there are also drawbacks. Not only is this Discourse thread a witness to that, I will also point to the GHC ticket about moving to an annual release cycle again, which highlights the perspective of the people who bear increased load because of the faster release cadence.

What aspects of those concerns would be made more dire by switching to a longer release cycle?

I would expect all of the above points except breakage to be made more dire to some extent. However, this does not really address the benefits of a longer cycle – merely that, as with many things, there is no free lunch when it comes to picking a release cadence.

13 Likes

I would think the GHC issue tracker would be the right way to approach this. But if you believe this should be handled by GHCHQ, there are instructions on the wiki about how to bring up issues.

2 Likes

Nightlies were an initiative by the Haskell Foundation.

The original document is here: GHC Nightlies - Google Docs

There are various reasons why they have been inconsistent, but none of them look particularly resource-intensive to me (also see the calculation in the original document wrt “Bindist Retention”… this calculation works well for nightlies, but not for “every commit”):

I consider it a design issue in CI.

Here’s the stats of nightlies availability: Grafana


The reason this matters is that it’s not an orthogonal project. It has a direct impact on release matters as a whole and addresses the specific points raised by @AndreasPK about contributors rushing to get their changes into a release proper.

So they need to be part of the release discussion.

1 Like

In case the leading phrase “This document is not a final plan” in GHC Nightlies didn’t make it clear enough:

  • GHC Status · Wiki · Glasgow Haskell Compiler / GHC · GitLab

Those issues seem “particularly resource intensive” to resolve (at least to me). But if you really do “consider it a design issue in CI”, I’m reasonably sure chreekat will be grateful for any help you can provide with gitlab.haskell.org's CI infrastructure…

| Series | Most recent release | Next planned release | Status |
| --- | --- | --- | --- |
| Nightlies | N/A | N/A | See Section 2 below |
| 9.12 | None | 9.12.1 (#25123) | Next major series |
| 9.10 | 9.10.1 | 9.10.2 (#24374) | :large_blue_circle: Current major series |
| 9.8 | 9.8.2 | 9.8.3 | :green_circle: Stable |
| 9.6 | 9.6.5 | 9.6.6 | :green_circle: Stable |
| 9.4 | 9.4.8 | None | :green_circle: Stable but no further releases planned |
| 9.2 | 9.2.8 | None | :yellow_circle: Stable but no further releases planned |
| 9.0 | 9.0.2 | None | :red_circle: Not recommended for use; no further releases planned |
| 8.10 | 8.10.7 | None | :red_circle: Not recommended for use; no further releases planned |

What I mean is:

  • latest stable: 9.10
  • previous stable: 9.8
  • old stable: 9.6

Anything after that doesn’t receive releases anymore.

In the above case and the current situation, I would argue that 9.6 is an LTS candidate and 9.8 releases can be stopped after 9.12 is released. Then you have 3 supported releases: 9.12, 9.10 and 9.6.

I’m aware. That’s why I’m not sure whether it can be done with the current resource constraints, even if we drop one branch.

As I pointed out above, I don’t believe in static policies. The version numbers mean nothing to the end user. Picking every 3rd release or so as LTS is not very interesting, imo, and would mean that distributors and users would still have to do their own evaluation of whether a specific GHC release is really high quality. Otherwise “LTS” just becomes a badge meaning “we backport here, but it may not be a great release”. This will have to be based on open discourse.

Those are good concerns. But I don’t think we will be able to get it right from the start. And I believe making good decisions about release quality is more important than keeping a specific pace.

I see LTS as an additional service. If it takes off, I would expect it to have positive effects on GHC development itself, helping developers get a better sense of what end users expectations about release qualities are. And once there’s better alignment, maybe it will be possible to set more specific time expectations for LTS releases.

5 Likes

It was linked above but I do want to direct people again to the “tick-tock” release proposal, which I think had a very good motivation and a very rich discussion:

It was closed because the discussion was too diffuse and ghchq didn’t have time to push things towards a consensus. However, people should familiarize themselves with it and perhaps consider a proposal based on it.

That said, if ghchq is open to the “first two and LTS” plan that hasufell proposed above, that seems a much more lightweight change to current practices which might bring significant benefits. Perhaps it should be written up as a tech proposal, just as tick-tock was?

3 Likes

Rather late reply - I held back from commenting in September.

I feel the current approach of multiple overlapping, semi-actively maintained stable releases is unsustainable both for upstream and for downstream/users (agreeing with @Bodigrim). My suggestion would be for GHC development to move a bit closer to the development models of Rust and Lean4, with time-based minor releases (perhaps bimonthly or every 10 weeks?) which can include feature drops too (but no major breaking changes). Obviously there would still be the development/nightly branch for the bleeding edge, from which the next major version can eventually be cut with some cadence (maybe annually or less often – when it is “ready”?). I.e. basically only one actively maintained release branch.

Hopefully this would also help with the nightly snapshot efforts. It also makes GHC minor releases much more predictable for users, and should increase adoption of the latest stable major versions. On the downside, one could expect a bit more cumulative breakage when moving to a new major version than now, though it should be kept to a minimum – perhaps by incubating major breaking changes longer in the development branch.

Let’s call the stable major version 10, then once 10.x stabilises Stackage LTS would move to ghc-10 and follow its regular minor updates. Perhaps we could test early 10.(x+1) releases first in Stackage Nightly and then promote them to Stackage LTS. Once ghc 11 is released, updates of ghc 10.x would stop completely and it would be considered old stable. ghc-11.1 would first appear in Stackage Nightly and then get promoted to LTS once ghc-11.x is considered stable and adopted enough, etc. Well in principle some short release overlap could be allowed too, but any updates to ghc-10 after ghc-11 would only be bugfix backports already released on the ghc-11 branch: that period would be slightly closer to the current model but with only 2 maintained release branches for a shorter time.

Obviously this would be a major change to ghc development and there are many more details to consider, but I wanted to write down this sketch of the idea and share it here.

Summary of releases in the GHC 9 years (revised, since the wiki page is missing more recent minor releases for 9.2 and 9.4):
2021: 9.0, 9.2 and 5 minor releases for 8.10 and 9.0
2022: 9.4 and 7 minor releases for 9.2 and 9.4
2023: 9.6, 9.8 and 9 minor releases for 9.4, 9.2, 9.6
2024: 9.10, (9.12) and 5 minor releases for 9.6 and 9.8 (to date)

(based on this sheet of ghc releases:
GHC releases - Google Sheets chronologically sorted from Index of /ghc/ and version history · Wiki · Glasgow Haskell Compiler / GHC · GitLab).

5 Likes

Please no.

The quality of the releases isn’t at the level required for this approach.

I’d be quitting my work as GHCup maintainer if I had to fix and manage semi-broken bindists every 10 weeks.

GHC HQ does not prioritize distribution quality, and I have tried for the last 2 years to convince them otherwise:

  • test bindists don’t work well and are not tested (GHC CI setup doesn’t decouple building and testing properly)… so ghcup test is also broken
  • bindist issues are not fixed post-release, so ghcup maintainers have to do it
  • bindist fixes are often not backported, because they may require changes to CI, so ghcup maintainers have to do it (like the missing manpage issue that I’m still fixing manually for every single release pre-9.10)
  • the build system has to be considered an end user interface
  • there is no proper release communication… not with CLC, not with core library maintainers, not with ghcup maintainers

I am done having these discussions. Whatever I bring up is seen as an isolated bug… but sometimes bugs are a manifestation of:

  • priorities
  • communication style
  • processes

Release engineering has to answer the question: what invariants do we want to maintain?

I don’t think anyone can answer this question right now regarding GHC. I’m sure someone will bring up:

  • lack of funding
  • just become a contributor

No, I think the priorities need to change.

8 Likes