Yes, it definitely is. I’m not trying to make a concrete proposal, just advocate for being open to the idea of increased release frequency, even though from the point of view of the status quo it seems impossible.
If the cadence were successfully increased (i.e. everything were adapted to work at the faster rate), wouldn’t that just increase, or even encourage, the appearance of even more breaking extensions and features?
If so, would increasing the cadence again still be considered an appropriate solution?
@tomjaguarpaw Thanks for the great links. I cannot agree with what you are saying strongly enough.
@atravers No, it would not. The problem today is that every infrequent release is already jarring, so “what’s one more breaking change?” Things just slide in under the radar. With frequent releases we notice each individual breaking item as it happens.
It’s a lot like the Nyquist–Shannon sampling theorem. Releases are samples; the breaking-change rate is the highest-frequency harmonic. Once we release often enough, we can actually observe the causes of breakage and adjust our policies accordingly. Right now, they are shrouded in low resolution.
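To spell the analogy out (my own loose restatement, not a rigorous claim about releases): the sampling theorem says a signal is only reconstructed faithfully when it is sampled at more than twice its highest frequency, so by analogy:

```latex
% Nyquist-Shannon: a signal whose highest frequency component is f_max
% can only be reconstructed faithfully when the sampling rate f_s satisfies
f_s > 2 f_{\max}
% Mapping releases to samples and the breaking-change rate to f_max,
% the analogous rule of thumb would be
\frac{\text{releases}}{\text{unit time}} \;\gtrsim\; 2 \times \frac{\text{breaking changes}}{\text{unit time}}
```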
Alternatively, the rate of breaking change could be lowered to the point where releases are then valid “samples”.
It’s not impossible at all. But the discussion seems very one-sided, because you’re not talking about who will do the work.
Will you volunteer to do the distribution side of things, including GHCup? What about Hackage trustees? Have you asked HLS developers? Are you going to take over the constant porting work and release management?
So, even without a concrete proposal, the bigger problem is that I’m unsure whether anyone has considered who will do the work.
Releases are not just “there to use”. There’s integration work to be done. Lots of this integration work can be neglected if we’re talking about nightlies. But that’s not what we’re talking about.
I think anyone who’s done distribution work for a while is getting chills from this idea, and you won’t win them over by suggesting they be open-minded.
You’re misunderstanding me. There’s no reason for anyone to get “chills” from what I’m saying. I’m not requesting anyone to do additional work.
To reiterate: my goal in talking about increased release frequency is to improve our system so we get more value for less work. Anyone who fears I’m going to ask them to do more work has got things completely backwards. Quoting myself:
my motivation for proposing more frequent releases is to decrease costs on the rest of the ecosystem
(including costs on GHCup, Hackage trustees, HLS developers, release managers)
To take a random analogy, the shift from bespoke, hand-crafted mechanical watches to digital watches simultaneously improved on all the following axes:
- reliability
- quality of time keeping
- cost (including labour cost)
- speed of manufacture
- speed of delivery to the final consumer
That’s what I mean when I say we need to be open to faster releases. “Faster releases” by themselves won’t improve anything. But faster releases can be one of many factors that go into overall system improvement (including reducing costs on you, Julian).
I don’t think I am.
I think you’re simply mistaken that it will decrease my workload or that of others.
Your analogy about watches isn’t very convincing either.
This definitely has my -1 for the time being, and quite a strong one.
OK, that’s fine. I have plenty more advocacy work to do.
Two quick questions:
- How is it that more releases would result in less breakage? More concretely, if changes X, Y, Z are introduced, isn’t the end result the same amount of breakage, regardless of the number of steps in between? Or is the plan to change the nature of the work being undertaken, and not only the cadence?
- Currently, are GHC releases held back at the release-candidate stage until they have tooling support (from, say, HLS), notwithstanding issues like 9.2.5? Is this something that could be done?
I believe those two things are orthogonal. The real question is how to have higher quality releases. Higher release frequency CAN improve that if the bottleneck for release quality is the end-user feedback loop. I don’t think that’s the case for GHC at all.
This has partly been communicated in Towards a better end-user experience in tooling · Issue #48 · haskellfoundation/tech-proposals · GitHub
Wrt HLS, this won’t be easily possible, because supporting newer GHC versions may need actual code changes. I don’t think we can make GHC wait for this. For point releases that change no API (assuming that’s actually true), it could be done, provided GHC trusts the HLS developers to be swift about it.
We need to reduce the costs of making a new release. Why isn’t it fully automated? It should be a button press away, and we should ensure that every commit is release-quality using CI.
If anyone wants more frequent releases, start there.
There are a few things going on here. To zoom out a bit and make a high-level summary:
My main interest is to improve overall Haskell ecosystem quality. I have noticed from many esteemed individuals over the last 12 months or so that there is an assumption that slowing down GHC releases is essential to improving quality, because that way there will be less breakage to deal with. Whilst I sympathise with the frustrations of dealing with breakage[1], I find this point of view mistaken. The correct way to measure breakage is not in terms of release frequency but in terms of breakage per unit time! That is to say, GHC could release daily but never break[2]. I urge people to disentangle in their minds the concepts of “frequency of release” and “breakage per unit time”. They need not be strongly connected.
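To put the distinction in symbols (my own illustrative formulation, nothing official):

```latex
% What users actually experience:
\text{breakage rate} \;=\; \frac{\text{breaking changes shipped}}{\text{elapsed time}}
% which is a different quantity from
\text{release frequency} \;=\; \frac{\text{releases}}{\text{elapsed time}}
% A compiler could release daily with a breakage rate of zero,
% or release once a year with a high breakage rate.
```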
Before we get to frequency, I want to make the case that decreased release latency would definitely help mitigate the costs of breakage. Consider this: the earliest commit that goes into a GHC release is made roughly 9 months before the release[3]. Many companies and individuals are waiting 12-24 months before upgrading to a new version of GHC, because their entire suite of package dependencies also has to catch up, amongst other reasons. Let’s say that, conservatively, a lot of GHC commits are waiting at least 24 months to get into the hands of users. There’s a lot of latency in that pipeline! It’s very easy for breakage to go completely unaddressed because so much additional work has been done on top of the breaking change in the intervening two years, and it’s become too hard to do anything about it.
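As a rough back-of-the-envelope, using the (estimated) figures above:

```latex
\underbrace{\sim 9 \text{ months}}_{\text{commit} \to \text{release}}
\;+\;
\underbrace{12\text{--}24 \text{ months}}_{\text{release} \to \text{adoption}}
\;\approx\; 21\text{--}33 \text{ months from commit to user}
```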
Consider an alternative fantasy world: commits get into the hands of users within 24 hours. Then users are aware of breakage much more quickly, can notify upstream about it much more quickly, and there’s a chance the breakage can be reverted or easily mitigated.
A real-world example of this is the DeepSubsumption reintroduction. There was about an 18-month lag between 9.0.1, which removed deep subsumption, and the compatibility flag being reintroduced in 9.2.4 (and even longer than 18 months between the commits that removed it and the one that added the compatibility flag). There’s a whole compiler version (the 9.0 series) that some users can never use! And this is perhaps the best example of a case where the costs of GHC breakage were mitigated. Mostly, breakage is never satisfactorily dealt with. The ecosystem must simply cope with it, somehow.
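For anyone who didn’t hit this personally, here is a minimal sketch of the kind of code that was affected; it’s a standard illustration of the simplified-subsumption change, not code taken from this thread:

```haskell
{-# LANGUAGE RankNTypes #-}
-- On GHC 9.2.4 and later the old behaviour can be restored with
-- {-# LANGUAGE DeepSubsumption #-}.

module SubsumptionExample where

f :: forall a b. a -> b -> b
f _ y = y

-- Under deep subsumption (GHC 8.10 and earlier, or 9.2.4+ with the flag)
-- the point-free definition
--
--   g = f
--
-- is accepted. Under simplified subsumption (9.0/9.2) it is rejected,
-- and the standard fix is to eta-expand:
g :: a -> forall b. b -> b
g x = f x
```

The eta-expanded form compiles on both old and new GHCs, which is why it became the usual migration fix.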
Where does frequency of release come in? Well, as touched on above, I think that reduced latency would be an important factor in improving quality. I also think it is unlikely that we can reduce latency without also increasing release frequency. A system gets good at doing what it does, and bad at doing what it doesn’t do. If releases continue to happen at the rate of only two per year then the system will never really have enough opportunity to exercise the parts of itself that need to improve in order to reduce latency. As @int-index says, automation has a vital role to play in this. But something that happens only twice a year can never really satisfactorily be automated. There simply isn’t enough selective pressure to make that automation actually good.
But, I am not saying that increasing frequency of releases is definitely going to improve Haskell ecosystem quality. It might even make it worse. It definitely will make it worse if done in the wrong way. I am not making a proposal of any sort. I am not asking anyone to do any additional work, and there is no need for anyone to feel “chills” about this topic being discussed. Furthermore, I think there is a lot of lower hanging fruit when it comes to quality. I don’t suggest that release frequency should be our top priority.
I am merely inviting people to consider the notion of GHC release frequency from a different perspective. That perspective is only possible once the notion of release frequency has become disentangled from the notion of breakage per unit time. Maybe nothing will come of this line of thinking, but I think it’s interesting and promising, which is why I’m inviting everyone to think it through with me.
[1] and I personally invest time in this, volunteering with the stability working group to determine ways to reduce breakage and mitigate its costs
[2] in an ideal world – in the real world we would fall short, but we could fall much less short than we currently do, in my assessment
[3] certainly more than 6, probably not as much as 12, but I’m not an expert in these matters, so let’s say 9 as a rough figure
Well, I can at least say that I did not suggest this. I think I’ve outlined many times that improving release quality consists of:
- better release coordination
- better communication with stakeholders
- sourcing opinions and providing pre-releases that can be widely tested
- improving the bus factor of GHC/tooling development
GHC developers in fact told me that slowing down releases would be more work for them.
What I wished for are higher-quality releases, and fewer of them. There may be arguments why fewer releases could improve release quality, but that also depends on many factors.
The main reason some of us wish for fewer releases is not so much the quality, but the toll it takes on tooling. The end user can simply ignore a couple of broken point releases (there are a lot in the 8.10 series).
From the outside these things (tooling) may not look like a lot of work. Or it may seem all this can be easily automated in some way. But that is not the case.
So, I guess we agree we want to improve release quality.
There are ways to solve this: Nightlies. These will only have rudimentary tooling support (e.g. no curated bindists, no prebuilt HLS binaries, etc.).
In an alternative fantasy world… this would have been communicated to the community earlier, because GHC developers knew it was a (non-trivial) breaking change. A couple of key people would have been enough to get the feedback “no, no”. It did not have to be exposed to the entire “world” to figure out it wasn’t a good idea.
This is what I’ve tried to describe earlier in this thread: community management, involving of relevant stakeholders (during development and release), managing expectations, etc.
All of this takes time and effort of course, but that’s the cost you pay for improving release quality.
I’m worried there may even be less communication and more isolation with higher frequency, because you’ll get feedback anyway post-release and can just go and revert.
Do we really need to expose the code to ALL users to get feedback? I don’t think so.
Nightlies are a great way to solve this balancing act of differing requirements.
So what’s the main problem with Nightlies? I guess it’s the fact that old code stops compiling with newer compilers all the time, so people can’t reasonably test a nightly on a real-world project.
So we’re back to the stability and backwards compatibility discussion.
FWIW I was not referring to you.
So, I guess we agree we want to improve release quality.
Yes, I think so.
Nightlies are a great way to solve this balancing act of differing requirements.
Yes, I think so too, as long as there’s a way to consume them that is not significantly harder than actual releases, otherwise very few people will do it.
Anyway, I take your point of view seriously and under advisement. You are an expert in distribution and release management, and I am not.
That is the idea behind the GHC Steering Committee, whose members are meant to represent various interests: education, research, industry. And simplified subsumption was accepted (#287), so apparently the key people you are referring to have, for one reason or another, not nominated themselves to join the committee, where their voices would have been heard.
This mistake can be corrected. The committee seeks new members regularly.
Thanks @tomjaguarpaw and @hasufell for your detailed thoughts as always!
On the second point:
- Currently, are GHC releases held back at the release-candidate stage until they have tooling support (from, say, HLS), notwithstanding issues like 9.2.5? Is this something that could be done?
In my view, likely a minority view [edit: maybe not], a working HLS is a prerequisite for a functioning ecosystem to be built around a version of GHC. I agree that it will often (always?) require code changes to HLS to support updated GHCs. However, if that can be done in the space of days, it seems wise from an ecosystem perspective to leave GHC at RC stage (or maybe with some “bleeding-edge” label on it) for a few days until those code changes have been made.
FWIW, I share this view.
That’s also my opinion.
I find this statement a bit sly: there are plenty of people in the Haskell community (me included) who have no interest in programming language design, dependent types or type systems in general, or endless syntactic-sugar debates. Such people would not find themselves comfortable within the GHC Steering Committee and will necessarily be underrepresented. The public outcry with regard to simplified subsumption was a clear indicator of how detached the GHC Steering Committee is and how little involvement the general public has with its proceedings. And this is not the fault of the public, because the community is never wrong.
Upd.: Detachment from users is an expected property of a committee (e.g., I’m not saying that the CLC is any better). This is unlikely to be something that can be truly fixed; it’s just something to bear in mind and acknowledge, not dismiss under the pretense that anyone can be elected.
It is an invitation, not a dismissal. The committee is running a call for nominations as we speak (until February 11th), see the announcement. EDIT: I got confused about the date, that was in 2022.
The community is not uniform. Some people are fine with breaking changes, some are not. The public outcry comes from the latter group, and I am trying to offer a solution. My apologies that the solution isn’t all rainbows and unicorns. I, for one, am perfectly fine with breaking changes such as simplified subsumption, but people who aren’t need to speak up during the decision-making process, not after.