Language, library, and compiler stability (moved from GHC 9.6 Migration guide)

Thank you @bgamari for posting this, and those involved in curating it.

I would however be lying if I didn’t say I’m fairly disappointed by this. Unless I’m mistaken, this document tells me GHC 9.6 will reject code that a compiler from ~6mo ago (9.4) still accepted without warning. This in turn means that if any code in any dependency has to be adjusted for any of these changes, the adjustment will cascade throughout the whole dependency tree and codebase.

I am not against progress. But the compiler should warn for at least two versions[*] before outright rejecting code. I fail to understand how we can require code-level migration for each release just to keep code compiling. If code compiles with version N, warning-free, it should compile with version N+2 as well, potentially with warnings.


[*]: preferably four, at a cadence of 6mo per release, but I’d be happy with two already!

7 Likes

I don’t follow. Other compilers do exactly that. This issue, https://github.com/rust-lang/rust/issues/41620, is one of many worth reading on how e.g. Rust handles backwards-compatibility-breaking changes (and you could argue there, too, that it’s a bug).

Complaining about resources is welcome, but I’m not sure it’s enough of an argument. It’s about goals. The GHC team could very well declare this a goal and then talk to industry backers, contributors and the HF about whether funding or manpower will be available.

But it does not appear to be a goal, at least according to the medium-term priorities. In section “Insufficient Resources” it’s not listed. In the section “Breaking Changes” it’s said that those are the responsibility of the GHC Steering Committee. But I don’t think the GHC Steering Committee rules over release management and deprecation cycles, or does it?

I think this requires more clarity.

6 Likes

It depends on the nature of the problem.

Looking through Rust compiler release notes, I found several examples of breaking bugfixes without any intermediate releases introducing compatibility warnings, so it’s not as simple as “Other compilers do exactly that”.

Sometimes, it makes sense to introduce a warning and deprecate the old behaviour. Other times, the whole code path is overhauled, and it isn’t technically feasible to keep the old code path around too. Especially when it is plain wrong and buggy (#2595, #3632, #10808, #10856, #16501, #18311, #21158, #21289).

11 Likes

Thanks Ben, and everyone involved, for the initiative of creating a migration guide. Where there are breaking changes it is extremely helpful to have an authoritative source that details how to address them. I’m looking forward to using 9.6!

I sympathise with those who are disappointed that there are breaking changes at all, but I’m glad this issue is being talked about more and more, and increasing effort is being put into mitigation. I believe we’re making big strides in the correct direction.

I would also encourage anyone to whom the issue of stability is important to participate in the Stability Working Group. The SWG has made great progress in advocating for stability in various corners of the Haskell ecosystem, since it was established a year or so ago.

10 Likes

Could someone provide a list of those “corners” where the SWG have been successful? Having briefly looked through the minutes of their last four meetings (to 2023-01-09), there seem to be a lot of items labelled “No progress”.

Otherwise the advice to “just participate” in the SWG as a response to a complaint about continual Glasgow-Haskell breakage seems “diversionary”, as people often have other responsibilities and therefore will only be intermittently able to dedicate time to the suggested actions - for example:

…which led to this response:

Not only would a “list of accomplishments” increase the credibility of such suggestions, in this case it would help to promote the SWG’s activities - by reading through it, people may acquire an interest in participating all by themselves: no suggesting required…

1 Like

I think this comment is immensely disrespectful and reflects a lack of first-hand experience in any form of long-running working collective, and I wish you would knock this sort of pointless negativity off.

I look at the same minutes of e.g. the most recent meeting and see a number of items marked “slow progress” and others with some progress and others with deferrals. But I also see a ton of items, and work being done by typically busy people. Many of the items are quite large and take time, and people with other responsibilities will typically only be intermittently able to dedicate time to them. So I see a working group actually working, at a deliberate pace.

I fail to see the purpose of negative comments such as the above, except to demoralize the people actively engaged in trying to accomplish the work that you claim to care about, and to make them feel unappreciated. If you feel things aren’t moving on particular issues at the pace you would like, then please try to figure out how to be a positive force – don’t just throw rocks at the people trying to help!

(Edit: This comment may now seem out of place because atravers edited their above comment. I’m leaving it nonetheless, because I think we all too often have discussions of this form. Vague complaints about others not doing enough, and when people point out what is being done, further complaints about how the same people struggling to find time to do things also don’t have time to write them up as clearly as the poster would like, and then further complaints when they are written up that they aren’t summarized still further, etc. None of this low-effort complain-posting helps in isolating what problems need to be solved in any specific detail, or in taking concrete steps to solve them. It just makes everyone feel crummy, to no good end. I’ve seen too many helpful and productive volunteers demoralized by this sort of thing, and I again urge people to carefully consider before they post whether their comment will help drive our common work forward or just undercut morale.)

21 Likes

Sure, but let’s not pretend we’re anywhere near what rustc does:

  1. there is a very strict three-step procedure for breaking changes that can take years
    • introduce a lint
    • make the lint deny by default
    • make it a hard error
  2. breaking changes are widely discussed in the community before they’re introduced via the above procedure (also may take years)
  3. there’s a strong motivation by the compiler team to keep backwards compatibility

I don’t think we have any of those points implemented. That was my point, not whether the current breakage in 9.6 would require a deprecation procedure or not.

This difference is very stark and should serve as a reference point when discussing diverging goals in our community.
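
For concreteness, here is a minimal sketch (module and names made up for illustration) of the closest analogue we have today, which lives at the library level rather than in the compiler: a DEPRECATED pragma makes every use site warn, and a later release can remove the binding outright.

    -- Library-level staging with what GHC already provides: the DEPRECATED
    -- pragma warns at every use site of oldParse; a later major release can
    -- then drop oldParse entirely. Names here are made up for illustration.
    module Legacy (oldParse, newParse) where

    {-# DEPRECATED oldParse "Use newParse; oldParse will be removed in the next major version" #-}
    oldParse :: String -> Maybe Int
    oldParse = newParse

    newParse :: String -> Maybe Int
    newParse s = case reads s of
      [(n, "")] -> Just n
      _         -> Nothing

Downstream users who want the hard cutoff early can compile with -Werror=deprecations, and GHC’s -Wcompat group plays a similar warn-first role for some compiler-level changes. But none of this amounts to the mandatory, multi-release lint procedure described above.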

5 Likes

Those points seem to only be true for stable Rust. This Stack Overflow answer says Rust does allow sudden breaking changes in experimental features:

Things are different for the unstable features that require nightly, though. The semantics can change, and code that once worked may fail to compile more or less suddenly;

I think at least GADT records should be considered an unstable feature in GHC. So I don’t think that change should require a long transition process.
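
For readers who haven’t met the feature, this is roughly what record syntax in a GADT declaration looks like (purely illustrative names, not the code affected by the 9.6 change):

    {-# LANGUAGE GADTs #-}

    -- Record syntax inside a GADT declaration ("GADT records"): each
    -- constructor lists its fields by name, and the field names double as
    -- selector functions where GHC can give them a sensible type.
    data Expr a where
      Lit  :: { litValue :: Int }                      -> Expr Int
      Pair :: { pairFst :: Expr a, pairSnd :: Expr b } -> Expr (a, b)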

In my opinion the real problem is that GHC does not clearly list which parts of the compiler/language are stable and which are not.

6 Likes

💯


If these features were behind an -fexperimental flag (or even required a different compiler), I doubt anyone would have issues with those features being iterated on. I doubt anyone would expect their code that compiled with -fexperimental to compile with a newer compiler.

@jaror, if GADTs are experimental, what other current GHC features would you consider experimental? Anything outside of Haskell2010?

1 Like

I’m not @jaror, but I would consider anything outside of Haskell2010 experimental — pretty much by definition.

1 Like

Absolutely, but spelling that out doesn’t really help anyone using Haskell today.

Rust stable is usable and supported by the entire ecosystem. You don’t need Rust nightly. Haskell2010 is not usable for industry-grade software. Not even for hobby projects.

This issue is not blocked by GHC nightlies, the GHC Steering Committee, the lack of a new standard, or even the base split. This is entirely a matter of GHC development, its goals and resources.

2 Likes

Huh? In what way “not usable”? Admittedly I’m using it only for hobby projects, but I’ve not noticed instability.

This whole thread is making me nostalgic: the language GHC supports teeters continually on the border between type-safe and undecidable/incoherent. (In the usual English sense, not the extensions including those words.) As each enhancement/feature is developed, it’s sometimes found that a feature (or more likely a combination of features) leads to unsoundness.

What I used to love about GHC was as soon as that was discovered, the loophole was closed at the next release with extreme prejudice. (OverlappingInstances + FunctionalDependencies + UndecidableInstances + orphan instances was a prolific source of such nasties.)

I wouldn’t have dreamed of complaining that my dodgy exploit – despite it ‘working’ for me – should be allowed to persist for ever. Please GHC protect me from dodginess, for I know not what I have done. Please just go back to rescuing me from myself. All I ask is you document it in the release notes – which are indeed getting a lot better.
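
(A self-contained sketch of the flavour of problem being alluded to. It uses an INCOHERENT catch-all instance rather than the fundep/orphan-instance cocktail above, which needs several modules to demonstrate, but the symptom is the same: instance selection can give two different answers for the same value.)

    {-# LANGUAGE FlexibleInstances #-}

    class Describe a where
      describe :: a -> String

    -- A catch-all instance; INCOHERENT tells GHC it may commit to it even
    -- when a more specific instance could still apply.
    instance {-# INCOHERENT #-} Describe a where
      describe _ = "something"

    instance Describe Int where
      describe _ = "an Int"

    -- The polymorphic wrapper is solved once, using the catch-all instance,
    -- so the two calls below disagree even though both arguments are Ints.
    describeAny :: a -> String
    describeAny = describe

    demo :: (String, String)
    demo = (describe (3 :: Int), describeAny (3 :: Int))
    -- ("an Int", "something")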

4 Likes

Regarding the “different compiler”: there is (at least) one historical precedent for this - the EGCS fork (of GCC 2.x).

One obvious difference is that the mode of development has changed drastically - rather than just forking GHC to e.g. “EGHS”, other parts of the Haskell ecosystem may have to be “split off” as well…so it may no longer be a viable option, considering how integrated modern development environments now are.

To me, Rust’s nightly, beta, and stable are different compilers; I don’t think we need to go full fork. We already have this in a sense: the debug, profiled, and other flavours are different compilers. Instead of an -fexperimental runtime flag, there could just be a different build with e.g. a -DEXPERIMENTAL compile-time flag; that compiler might not understand the same flags as the non-experimental compiler, might know about more language features, …

I hope this cleared up any confusion as to what I meant.

1 Like

You wouldn’t be able to compile 95% of Hackage.

2 Likes

It’s deprecated, but GHC still supports the -fglasgow-exts flag - maybe it could be remade into -fexperimental or -DEXPERIMENTAL.

To elaborate on the Stability Working Group, I would say its most important function is facilitating communication and sharing information related to breaking changes, such as the cost of breaking changes, potential upcoming breaking changes and how we can prevent them or mitigate them. Representatives of GHC, Cabal and Stack attend.

I do not see the SWG (despite its name) as a way of getting work done. It’s not a group where we dole out work, perform it, come back next time for more, and publish a record of results. I wouldn’t expect the explicit lists of “tasks accomplished” to be particularly long. It’s one of those things where, if you’re doing your job right, no one knows you’ve done anything at all.

As someone who feels strongly about reducing the frequency and severity of breaking changes in the Haskell ecosystem, I would encourage like-minded others to join, because it’s an effective way of helping maintainers of critical ecosystem projects to learn about the costs of breakage and give them information that can help them reduce or mitigate it.

4 Likes

I would like to second @tomjaguarpaw here. The SWG has been a remarkably useful body thus far due to the discussions it has fostered that otherwise likely would not have happened. I appreciate each of the contributors who take time out of their day every two weeks to reflect on the status quo and how it may be improved, and to work towards concrete solutions, even if progress may appear slow. The best way to change this is to come and contribute; we are all busy, but many hands make light work.

6 Likes

We took a first step in this direction in GHC #21475, which fell out of an SWG meeting and was implemented by @telser, the SWG chair. We discussed distinguishing “stable” from “less stable” extensions but ultimately were reluctant to do so as we struggled to find a definition of “stable” which would be both useful and accurate.

To pick a particular example, extensions like TypeFamilies are quite tricky; they are quite useful and therefore widely relied upon. However, they have no defined operational semantics; sadly, changes in their de facto behaviour can affect end-user programs. Sometimes these changes are merely reflected in compilation speed; more rarely they can change reduction behaviour (e.g. where “stuck” families or UndecidableInstances are involved). The users guide is quite up-front about this semantic gap: there is an entire section explaining that reduction is driven heuristically.
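
To make the “stuck” case concrete, here is a small illustrative example (not taken from the users guide):

    {-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}

    import Data.Kind (Type)

    -- A closed type family with no equation for the empty list.
    type family Head (xs :: [Type]) :: Type where
      Head (x ': xs) = x

    -- Head '[Int, Bool] reduces to Int, so this typechecks.
    ok :: Head '[Int, Bool]
    ok = 42

    -- Head '[] matches no equation: it never reduces, it just stays
    -- "stuck", so GHC cannot show it equal to any concrete type and a
    -- definition like the following is rejected:
    --
    --   bad :: Head '[]
    --   bad = 42   -- error: no instance for Num (Head '[])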

Does this make TypeFamilies “unstable”? One could argue that the answer is “yes” and that it won’t be stable until we have a comprehensive formal definition of the extension’s behavior, both semantic and operational. However, this is probably okay: in most cases the utility of what we have likely outweighs the potential negative impact of such “instability”. Consequently, it’s hard to see the value in stamping an “unstable” label on the extension given how underdefined that term is.

TypeFamilies are merely one case of this. GHC is a composition of many research insights (things we Haskellers typically call “language extensions”) by many people. Consequently, we often find ourselves at the edge of humankind’s understanding of our craft. Sometimes we are aware that there are things we don’t know; in other cases we don’t even know what we don’t know. This is why we are reluctant to call “extensions” stable unless we have a fairly comprehensive theory (semantic and operational) accounting for the extension and its interactions with related extensions.

9 Likes

For the record, these posts were originally made on the GHC 9.6 migration guide thread and have been moved here as they discuss stability more generally.