Priorities for upcoming GHC releases

Source-only downloads are one option: "early adopters" can build those for themselves, which would help to detect more bugs earlier, both in the tools themselves and in their build systems.

1 Like

At the risk of adding more noise into the discussion: I think it’s clear that some communication of concerns will go a long way.

In particular, GHC did not always have the release cycle it has now (I'm a big fan of the Chesterton's Fence parable):

  • What were the motivations/concerns for the current approach?
  • Were those concerns addressed?
  • What aspects of those concerns would be made more dire by switching to a longer release cycle?

As we all know, the amount of work necessary tends to fill the time allotted, so there’s some benefit to setting a faster pace. At the same time, maybe things have changed enough that it’s time to revisit.

For folks wanting more consistent nightlies, what does the current situation prevent you from accomplishing? Of course having more intermediate states to test is useful, but why stop at nightly? Why not three times a day, or hourly! More seriously, on the spectrum from "every commit" to "every major release", why is nightly the "right" frequency for GHC?

My understanding is that the Haskell community simply does not have the available resources to consistently produce nightly releases. If resources are to be redirected to accomplish this, it would need to be because there is a very deep/costly concern.

My proposal: while discussion here is good and worthwhile, I think it's time for us to start thinking about writing down these concerns explicitly. There's a lot of "ideally we would do X", which is a good way of setting targets to aim for, but not a good way to determine what actions to perform tomorrow.

We just launched blog.haskell.org for exactly this sort of ecosystem-wide communication from core teams. I’ll happily work with folks to write this stuff down without getting bogged down by the sometimes-heated discussion. It may be that they are written down somewhere already. If they’re still up to date, great! If they aren’t up to date, let’s update them.

5 Likes

One nice thing about the rolling releases is that they are some semblance of "official" versions. So people can release a library to Hackage against a newer GHC if they are into experimenting. It feels like you kinda lose that with nightlies/source-only, but maybe I'm wrong there.

1 Like

It's worth pointing out that the current release priorities and schedule are the result of a long history, and not simply of the GHC Team picking a convenient schedule. My impression is that the schedule was set with the goal of settling on something that works with the resources available while still serving users well.

To make this more obvious it might be helpful to go into some of the history:

The first major event I witnessed in this regard was this 2017 blog post by Ben: Reflections on GHC's release schedule — The Glasgow Haskell Compiler, which first suggested the current goal of a 6-month cadence.

As I remember it, not everyone was in favor of faster releases, but the majority of user feedback was positive. Discussions obviously happened in many places, but the ones still easily accessible seem to confirm this, with the mailing list and Reddit being overwhelmingly positive about these plans at the time.

A few years later, the topic of lengthening the cycle to yearly was brought up on the GHC issue tracker. In my opinion that ticket does a stellar job of outlining some benefits of a longer schedule and surveying other projects. But the comments were mixed, both for and against longer cycles, and the ticket was ultimately closed by the contributor who opened it without much happening.

Another two years later, Ben once again set out to gather feedback from users, this time with the idea of a tick-tock release cycle. It got mixed feedback, and with interest fading it was eventually abandoned.

This isn't a complete history, but even this should make clear that the current schedule wasn't arrived at without reflection or consideration of the trade-offs involved.


What does that tell us about the reasons for the current cadence?

I would summarize the motivations for the current cadence, as I perceive them, as follows:

  • A year or more between major releases was seemingly not very popular in the past (based on the reactions to the 2017 blog post and the change of schedule).
  • Why exactly this was the case isn't easily determined, but after reading through the things linked above, as well as some other material, it seems to be a mix of:
    • A large delay between feature work and it being available to users.
    • Major updates often causing large amounts of breakage.
    • Users locked into boot library versions often had to wait rather long for bug fixes.
  • For GHC contributors there were additional drawbacks:
    • Large lag between changes in GHC and feedback (for features and bugs both).
    • It caused a “mad rush” period in GHC before a release, where contributors frantically tried to get their features merged before the release window closed.

I don't think the faster cadence inherently improved the "breakage per unit of time" that people experienced. But I would say it improved all the other aspects mentioned above. However, there are also drawbacks: not only is this Discourse thread a witness to that, I will also point again to the GHC ticket about an annual release cycle, which highlights the perspective of the people who bear an increased load because of the faster release cadence.

What aspects of those concerns would be made more dire by switching to a longer release cycle?

I would expect all of the above points except for breakage to become more dire to some extent. However, this does not really address the benefits of a longer cycle; merely that, as with many things, there is no free lunch when it comes to picking a release cadence.

14 Likes

I would think the GHC issue tracker would be the right way to approach this. But if you believe this should be handled by GHC HQ, there are instructions on the wiki about how to bring up issues.

2 Likes

Nightlies were an initiative by the Haskell Foundation.

The original document is here: GHC Nightlies - Google Docs

There are various reasons why they have been inconsistent, but none of them looks particularly resource-intensive to me (also see the calculation in the original document regarding "Bindist Retention"; that calculation works well for nightlies, but not for "every commit").

I consider it a design issue in CI.

Here are the stats on nightlies availability: Grafana


The reason this matters is that nightlies are not an orthogonal project. They have a direct impact on release matters as a whole and address the specific points raised by @AndreasPK about contributors being nervous about getting their changes into a proper release.

So they need to be part of the release discussion.

1 Like

In case the leading phrase “This document is not a final plan” in GHC Nightlies didn’t make it clear enough:

  • GHC Status · Wiki · Glasgow Haskell Compiler / GHC · GitLab

Those issues seem "particularly resource-intensive" to resolve (at least to me). But if you really do "consider it a design issue in CI", I'm reasonably sure chreekat will be grateful for any help you can provide with gitlab.haskell.org's CI infrastructure…

| Series | Most recent release | Next planned release | Status |
|---|---|---|---|
| Nightlies | N/A | N/A | See Section 2 below |
| 9.12 | None | 9.12.1 (#25123) | Next major series |
| 9.10 | 9.10.1 | 9.10.2 (#24374) | :large_blue_circle: Current major series |
| 9.8 | 9.8.2 | 9.8.3 | :green_circle: Stable |
| 9.6 | 9.6.5 | 9.6.6 | :green_circle: Stable |
| 9.4 | 9.4.8 | None | :green_circle: Stable but no further releases planned |
| 9.2 | 9.2.8 | None | :yellow_circle: Stable but no further releases planned |
| 9.0 | 9.0.2 | None | :red_circle: Not recommended for use; no further releases planned |
| 8.10 | 8.10.7 | None | :red_circle: Not recommended for use; no further releases planned |

What I mean is:

  • latest stable: 9.10
  • previous stable: 9.8
  • old stable: 9.6

Anything older than that no longer receives releases.

In the above case and the current situation, I would argue that 9.6 is an LTS candidate and 9.8 releases can be stopped after 9.12 is released. Then you have 3 supported releases: 9.12, 9.10 and 9.6.

I’m aware. That’s why I’m not sure whether it can be done with the current resource constraints, even if we drop one branch.

As I pointed out above, I don't believe in static policies. The version numbers mean nothing to the end user. Picking every third release or so as LTS is not very interesting, imo, and would mean that distributors and users still have to do their own evaluation of whether a specific GHC release is really high quality. Otherwise "LTS" just becomes a badge saying "we backport here, but it may not be a great release". This will have to be based on open discourse.

Those are good concerns. But I don’t think we will be able to get it right from the start. And I believe making good decisions about release quality is more important than keeping a specific pace.

I see LTS as an additional service. If it takes off, I would expect it to have positive effects on GHC development itself, helping developers get a better sense of end users' expectations about release quality. And once there's better alignment, maybe it will be possible to set more specific time expectations for LTS releases.

6 Likes

It was linked above but I do want to direct people again to the “tick-tock” release proposal, which I think had a very good motivation and a very rich discussion:

It was closed because the discussion was too diffuse and ghchq didn’t have time to push things towards a consensus. However, people should familiarize themselves with it and perhaps consider a proposal based on it.

That said, if ghchq is open to the “first two and LTS” plan that hasufell proposed above, that seems a much more lightweight change to current practices which might bring significant benefits. Perhaps it should be written up as a tech proposal, just as tick-tock was?

3 Likes

Rather late reply - I held back from commenting in September.

I feel the current approach of multiple overlapping, semi-actively maintained stable releases is unsustainable both for upstream and for downstream users (agreeing with @Bodigrim). My suggestion would be for GHC development to move a bit closer to the development models of Rust and Lean 4, with time-based minor releases (perhaps bimonthly or every 10 weeks?) which can also include feature drops (but no major breaking changes). Obviously there would still be the development/nightly branch for the bleeding edge, from which the next major version could eventually be cut with some cadence (maybe annually or less often, when it is "ready"?). That is, basically only one actively maintained release branch. Hopefully this would also help with the nightly snapshot efforts. It would also make GHC minor releases much more predictable for users, and should increase adoption of the latest stable major versions. On the downside, one could expect a bit more cumulative breakage when moving to a new major version than now, though it should be kept to a minimum, perhaps by incubating major breaking changes longer in the development branch.

Let's call the stable major version 10. Then, once 10.x stabilises, Stackage LTS would move to ghc-10 and follow its regular minor updates. Perhaps we could test early 10.(x+1) releases first in Stackage Nightly and then promote them to Stackage LTS. Once ghc-11 is released, updates of ghc-10.x would stop completely and it would be considered old stable. ghc-11.1 would first appear in Stackage Nightly and then get promoted to LTS once ghc-11.x is considered stable and adopted enough, etc. In principle some short release overlap could be allowed too, but any updates to ghc-10 after ghc-11 would only be bugfix backports already released on the ghc-11 branch: that period would be slightly closer to the current model, but with only two maintained release branches for a shorter time.

Obviously this would be a major change to ghc development and there are many more details to consider, but I wanted to write down this sketch of the idea and share it here.

Summary of releases over the GHC 9 years (revised, since the wiki page is missing more recent minor releases for 9.2 and 9.4):
2021: 9.0, 9.2 and 5 minor releases for 8.10 and 9.0
2022: 9.4 and 7 minor releases for 9.2 and 9.4
2023: 9.6, 9.8 and 9 minor releases for 9.4, 9.2, 9.6
2024: 9.10, (9.12) and 5 minor releases for 9.6 and 9.8 (to date)

(based on this sheet of GHC releases: GHC releases - Google Sheets, sorted chronologically from Index of /ghc/ and version history · Wiki · Glasgow Haskell Compiler / GHC · GitLab).

5 Likes

Please no.

The quality of the releases isn't at the level required for this approach.

I’d be quitting my work as GHCup maintainer if I had to fix and manage semi-broken bindists every 10 weeks.

GHC HQ does not prioritize distribution quality, and I have spent the last 2 years trying to convince them otherwise:

  • test bindists don’t work well and are not themselves tested (the GHC CI setup doesn’t decouple building and testing properly)… so ghcup test is also broken
  • bindist issues are not fixed post-release, so ghcup maintainers have to do it
  • bindist fixes are often not backported, because they may require changes to CI, so ghcup maintainers have to do it (like the missing manpage issue that I’m still fixing manually for every single release pre-9.10)
  • the build system has to be considered an end-user interface
  • there is no proper release communication… not with the CLC, not with core library maintainers, not with ghcup maintainers

I am done having these discussions. Whatever I bring up is seen as an isolated bug… but sometimes bugs are a manifestation of:

  • priorities
  • communication style
  • processes

Release engineering has to answer the question: what invariants do we want to maintain?

I don’t think anyone can answer this question right now regarding GHC. I’m sure someone will bring up:

  • lack of funding
  • just become a contributor

No, I think the priorities need to change.

9 Likes

I don’t get that. What would that solve? More minor features would be available faster?

Given how absolutely HUGE the .stack / .cabal / /nix caches are even with a few active compilers, I dread the increased churn of "no, please use MY ghc/stackage version so I can share the build results I've accumulated by now".
If everyone used every damn minor out there, I'd need a 640 TB drive to avoid monkeypatching their projects.

3 Likes

Minor releases are supposed to be backward API compatible, no?
If everyone were using the same longer-lived major version (or two), that should considerably reduce the overall size of the required caches and the wider maintenance burden on the Haskell community.
Currently I build my projects for every single GHC 9 major version as far as possible, which is quite ridiculous.
As I pointed out, there have already been 6 releases overall this year and we are not even done yet: we are about to see 9.12.1, and there hasn't even been a single minor update pushed for 9.10 yet… I think development is stretched over too many major versions for everyone.

(Anyway it's good you reminded me to run stack-clean-old keep-minor again. :-)

…and since 2024 May 22 (i.e. the last six months)? A thread which seems to defy a mutually-agreeable conclusion.

With a view to the approaching proselytised (northern hemisphere) winter-solstice festivities, here's one way we can all have some time to perhaps contemplate a better future for Haskell: shut down all the Haskell servers across December (and just have a helpful "back next year!" message to alleviate any confusion). Everyone can then just enjoy their time with the people they care about (and chreekat can have some more time away from "the s**t mines"), to return with new-year's energy and enthusiasm to solve this and other ongoing problems!

1 Like

Yes, but this is not the case.

Only after the base/ghc split has borne fruit can we start assuming some stability between minor versions of base.

There has already been some improvement, at least. The actual breaking changes are happening in ghc-internal, which now gets a major version bump with every (minor) version bump of GHC, so it follows the PVP. Previously, the breaking changes were in the ghc library, which did not follow the PVP.

But base still re-exports (nearly) everything from ghc-internal, so breaking changes in the latter affect the former. Still, the path to a non-breaking release of GHC is now visible.
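To make that distinction concrete, here is a minimal, hedged sketch in Haskell (the commented-out ghc-internal import is hypothetical and only illustrates the pattern to avoid): code that sticks to base's CLC-governed surface should be insulated from ghc-internal's per-release major bumps, while code importing ghc-internal directly has to track every GHC release.

```haskell
-- A minimal sketch; module names below other than Data.List are hypothetical.
module MyLib (sortedWords) where

-- Importing only from base's stable, CLC-governed surface means this module
-- should keep building across GHC releases that don't bump base's major version.
import Data.List (sort)

-- Reaching into ghc-internal instead would tie the code to a package that takes
-- a major version bump with every GHC release, with no stability guarantees:
-- import GHC.Internal.SomeModule (someFunction)  -- hypothetical; avoid

sortedWords :: String -> [String]
sortedWords = sort . words
```

In other words, as long as breaking changes stay behind base's re-export boundary, user code like this only sees them when base itself takes a major bump.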

It looks like the current status of this process is waiting on @bgamari to make recommendations for deprecating unstable modules.

4 Likes

Just to add to this: another area that needs some work for the split is tooling support. How will cabal and stack(age) handle this change?

I have a tracking issue (if there are other issues please link them!) for cabal here: Tracking issue: easily reinstallable boot libraries · Issue #10440 · haskell/cabal · GitHub

1 Like

Hi, I'm interested in contributing directly to GHC for its upcoming release, and I'd love to explore ways to incorporate advanced mathematical or combinatorial techniques into the compiler. I'm currently studying for a Master's in mathematics and computer science, and I'm especially keen on areas where GHC could benefit from optimizations, formal methods, or type-driven improvements that support complex computations or provide enhanced guarantees at compile time.

Additionally, if there are ongoing projects related to combinatorial optimization or type system improvements, I’d love to get involved!

4 Likes

I don’t quite understand what you mean by this.

  • ghc-internal follows PVP (this was negotiated and agreed on by GHC HQ)
  • ghc-internal is an implementation detail of base… anything that affects base API, no matter where the implementation lives… has to go through a CLC proposal
  • base is supposed to be PVP compliant today… if it’s not, then that’s a bug
1 Like

For the curious, here’s the proposal that encodes the contract between GHC HQ and the CLC: tech-proposals/proposals/accepted/051-ghc-base-libraries.rst at main · haskellfoundation/tech-proposals · GitHub

I think this means we must be careful when changing functions in ghc-internal that are re-exported by base, because these are subject to the versioning of base. If we introduce a breaking change in ghc-internal (which is OK, because we are very likely to make a major bump in every GHC release anyway), we must also bump the major version of base if that function is re-exported, and the change must be signed off by the CLC. I think that is what Bryan meant.
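As a toy illustration of the PVP arithmetic being invoked here (just a sketch, not any official tooling, and the version numbers are only examples): under the PVP the first two version components form the major version, so a breaking change that is visible through base's API has to bump one of them.

```haskell
-- Toy PVP helper; the version numbers in main are examples, not claims about
-- any particular GHC release.
type Version = [Int]

-- Under the PVP, versions are A.B.C(.D): A.B is the major version, C the minor.
-- A change counts as a major bump if the first two components differ.
isMajorBump :: Version -> Version -> Bool
isMajorBump old new = take 2 old /= take 2 new

main :: IO ()
main = do
  print (isMajorBump [4,19,1,0] [4,20,0,0])  -- True: breaking changes allowed
  print (isMajorBump [4,20,0,0] [4,20,1,0])  -- False: existing API must stay compatible
```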

However, the proposal does not constrain what changes GHC devs make to ghc-internal (whose major version is incremented all the time). In particular, it does not mean that the API exported by ghc-internal is directly governed by the CLC, unless it touches the API of base.

5 Likes

You’re right. I get my terminology mixed up. And I don’t seem to understand the goals of the GHC/base split as well as I thought I did.

I suppose this withdrawn CLC proposal is an example of a positive outcome of the split. Something that would have gone into base, and would have required the full CLC proposal process and the introduction of a breaking change, went into a separate library altogether. So the benefit is that fewer breaking changes go into base, and thus base has less churn. Some day this could mean that the difference in base versions between GHC x.y and GHC x.(y+1) is only a minor bump.