Why do GHC and `base` have to change in incompatible ways every release? Maybe limit those changes to every other release.
The checklist above is for major releases; sorry for the confusion.
Or better, don't even waste time on intermediate releases. Anyone interested in following bleeding-edge development can build GHC from source.
I mean, we are still waiting for the final release in the GHC 9.6 series. There has been no 9.10.2 yet. There is no release in the GHC 9.12 series without critical correctness issues (#25784). Yet GHC 9.14 is expected to be forked next month. Who exactly benefits from such a cadence?
Isn't that what a major release is, by definition? Otherwise we could just keep releasing GHC x.y.(n+k) indefinitely.
Let's assume it's just a major GHC version, but all the core libraries, including `base`, only have minor bumps… what do you need to do?
My guess is nothing.
On the other hand, if there are major bumps in core libraries but nothing else, we could definitely automate part of that process on Hackage:
- relax upper bounds to match the versions shipped with the new GHC and build the package in CI
- on success, create a draft Hackage revision that the maintainer can accept with a single click
This is technically possible, although there are a lot of details to consider.
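To make the first step concrete, here is a minimal sketch of what such a CI job might run. The compiler version and the `--allow-newer` targets are illustrative assumptions, not a description of any existing Hackage mechanism:

```
# Hypothetical CI step: build the package with the new GHC while
# relaxing its upper bounds on the boot libraries that moved.
cabal build --with-compiler=ghc-9.14.1 --allow-newer=base,template-haskell
```

If that build succeeds, the relaxed bounds are exactly what the draft revision would contain.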
So now my question is… could the CLC simply demand that there is only one major `base` bump per year? GHC can still pump out two major releases, but the end-user experience would be different.
Just to clarify, I would expect the next major release to be 10.1.1 (not 9.14.1), but I guess I am wrong and you'll have to go through the checklist for 9.14 (which one is correct?).
It doesn't, much. Since 9.8 there have been few incompatible changes in GHC, `base`, and the boot libraries. In fact, for 9.12 almost nothing has been reported (although I suspect 9.12 is little used).
Thanks to the HF Stability Working Group for their work on this.
You can see my breakage inventories at GitHub - tomjaguarpaw/tilapia: Improving all Haskell's programmer interfaces
Well, me for one. I'm looking forward to getting my hands on what will be in 9.14! Of course, I could just build my own or, more simply for me, use a nightly build. Whether the existing cadence is better overall for the ecosystem, well, that's a good question. I don't know.
I like this idea (but indeed there are lots of details to consider, and lots of work to be done).
This seems plausible.
My view continues to be: we should very strongly discourage breaking changes, more strongly for things deeper in the dependency tree, so GHC and `base` should be discouraged from breaking most strongly of all.
You are much more diligent than me! I think I save time compared to you by being more lax. My workflow is more like:

> `git clone` a package. `git fetch` to ensure I'm up to date with `master`.

Change to the package directory on my system, because I already have them all cloned.

> Build it with a new GHC, figure out any `allow-newer` and `source-repository-package` necessary.

I generally don't bother using `allow-newer` because I don't feel the need to update my package if my dependencies are not yet updated, but sometimes it's worth getting ahead of the game. I almost never go as far as `source-repository-package`. (See the sketch after this list for what those two knobs look like.)

> Test with a new GHC, especially `doctest`. Fix any discrepancies, hopefully minor.

> Benchmark with a new GHC, compare against old GHC.

I don't bother benchmarking. I can get away with this because I'm not maintaining critical, performance-sensitive packages like `text` and `bytestring`.

> Fix any new GHC warnings.

I don't bother fixing warnings.

> Fix any `cabal check` warnings.

I don't bother fixing warnings.

> Bump dependency versions.

> Update changelog.

I generally don't bother updating the changelog if all I did was bump dependency versions.

> Regenerate CI.

I don't regenerate CI; I just add the new compiler to the GitHub Actions `ci.yaml` that uses `haskell-actions`. (I'm not sure what CI setup requires regeneration.)

> Push and wait for CI to complete. Review results.

> Raise pull request, request reviews.

I typically maintain packages where I can unilaterally update to the latest GHC release without anyone else's authorization.

> Next, `cabal get` the released version of the package.

I didn't understand this part. Doesn't it imply a Hackage release has already been made?

> Bump dependencies, compile, test and benchmark it.

Seems like this has been done twice now, locally and in CI, so I wouldn't bother doing this a third time.

> Decide whether a Hackage revision is enough or do we need to go for a new release.
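For readers who haven't used those two `cabal.project` knobs, here is a minimal sketch; the dependency name, repository URL, and commit are made up for illustration:

```
-- cabal.project (hypothetical package names)
packages: .

-- Relax somedep's upper bound on base so the build plan accepts the
-- base version shipped with the new GHC:
allow-newer: somedep:base

-- Or build somedep from its repository at a pinned commit, e.g. when a
-- fix exists upstream but has not been released to Hackage yet:
source-repository-package
  type: git
  location: https://github.com/example/somedep.git
  tag: 0123456789abcdef0123456789abcdef01234567
```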
So I suspect my life is easier than @Bodigrim's in this regard because I maintain fewer critical performance-sensitive libraries. So thanks very much to @Bodigrim for doing this (and to everyone else who does so too).
But enough to require attention. Could those few incompatible changes not have waited for 10.x?
I wanted to link this previous discussion about release cadence in case anyone reading this thread hasn't seen that one: Priorities for upcoming GHC releases - #46 by Bodigrim
GitHub - haskell-CI/haskell-ci: Scripts and instructions for using CI services (e.g. Travis CI or Appveyor) with multiple GHC configurations generates the GitHub Actions files from the cabal files in the project. E.g., you add a new `tested-with` compiler, regenerate CI, and then that's added to your workflows.
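In other words, the flow is roughly the following (the GHC versions are just examples):

```
# 1. Add the new compiler to the .cabal file's tested-with field, e.g.:
#      tested-with: GHC ==9.8.4 || ==9.10.1 || ==9.12.2
# 2. Regenerate the workflow; haskell-ci reads the cabal file and
#    rewrites .github/workflows/haskell-ci.yml accordingly:
haskell-ci regenerate
```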
This conversation makes me think that we need to do a better job of catching performance regressions in fundamental libraries like `text` and `bytestring` before a GHC is released. Perhaps we could teach `head.hackage` to run and compare benchmark results.
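As a sketch of what the comparison could look like at the package level: `tasty-bench`, which `text` and `bytestring` already use, can record a CSV baseline and fail on regressions. The compiler versions and the 10% threshold below are illustrative:

```
# Record a baseline with the current GHC...
cabal bench --with-compiler=ghc-9.12.2 \
  --benchmark-options='--csv baseline.csv'
# ...then compare against it with the release candidate, failing if any
# benchmark gets more than 10% slower:
cabal bench --with-compiler=ghc-9.14.1 \
  --benchmark-options='--baseline baseline.csv --fail-if-slower 10'
```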
Just commenting on this particular technical aspect: since about 2017 it's been really trivial to add a so-called extra-dep to your `stack.yaml` file, which exists precisely to let you use a newer or modified version of any given package, and Stack is happy with that. It is either a different version or a Git repository at a given commit. This supports exactly the workflow that happens a lot at companies, where you need a new version of a library that fixes some bug or has a new feature that would really benefit your project.
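For example (the package name, version, repository, and commit are placeholders):

```
# stack.yaml
resolver: lts-23.0

extra-deps:
  # A newer Hackage version than the snapshot provides:
  - somedep-1.2.3
  # Or a Git repository pinned to a commit, e.g. for an unreleased fix:
  - git: https://github.com/example/somedep.git
    commit: 0123456789abcdef0123456789abcdef01234567
```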
Finally, there is a roughly quarterly Stack snapshot bump, after which you are usually (but not always) able to delete some of those lines.
This is a natively supported, built-in feature of Stack and takes one line. In Nix it's slightly more involved or not supported, but every company has a slightly different Nix configuration and set of tools, so it's hard to speak for all cases.
It's not about changing lower bounds, but about others' expectations if lower bounds stay at very old, unsupported versions of GHC.
This is the same in any software development: you need to actively set the lowest supported version if you don't want to be overwhelmed with support requests and bug reports for things you don't have the capacity or time to support.
Of course, things can change when money is involved; many OSS companies make a living supporting really old codebases that upstream doesn't have the capacity or time to support.
Here's the software you might want to use for that: GitHub - IPDSnelting/velcom: Continuous benchmarking
Example instance: VelCom (although this one is barely functional because the instance is so large).
My experience is that most of the time it's not about `base`, but about `template-haskell`, `ghc`-the-library, `Cabal`-the-library, and such.
Could you name anything specific?
I find it easier to persevere and do it all at once. The alternative of "if dependencies do not compile, make a note in your calendar to check it next week" means way too many context switches for me.
No release has been done so far: we just brought Git HEAD in line with the latest GHC. Now it's time to assess whether we need a new release or whether the latest Hackage upload (which could be far behind Git HEAD) still compiles.
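Concretely, that check is something like this (the package name and GHC version are hypothetical):

```
# Fetch the latest released version from Hackage and see whether it
# still builds with the new GHC:
cabal get somepackage        # unpacks e.g. somepackage-1.2.3/
cd somepackage-1.2.3
cabal build --with-compiler=ghc-9.14.1
```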
That was attempted in !271: Run benchmarks for bytestring/text/containers packages · Merge requests · Glasgow Haskell Compiler / head.hackage · GitLab; it would be great to land it.