There are a few things going on here. To zoom out a bit and give a high-level summary:
My main interest is to improve overall Haskell ecosystem quality. Over the last 12 months or so I have noticed, from many esteemed individuals, an assumption that slowing down GHC releases is essential to improving quality, because that way there will be less breakage to deal with. Whilst I sympathise with the frustrations of dealing with breakage[1], I find this point of view mistaken. The correct way to measure breakage is not in terms of release frequency but in terms of breakage per unit time! That is to say, GHC could release daily and yet never break anything[2]. I urge people to disentangle in their minds the concepts of “frequency of release” and “breakage per unit time”. They need not be strongly connected.
Before we get to frequency, I want to make the case that decreased release latency would definitely help mitigate the costs of breakage. Consider this: the earliest commit that goes into a GHC release is made roughly 9 months before the release[3]. Many companies and individuals then wait a further 12-24 months before upgrading to a new version of GHC, because their entire suite of package dependencies also has to catch up, amongst other reasons. Adding those together, let’s say that, conservatively, a lot of GHC commits wait at least 24 months to get into the hands of users. There’s a lot of latency in that pipeline! It’s very easy for breakage to go completely unaddressed, because so much additional work has been done on top of the breaking change in the intervening two years that it’s become too hard to do anything about it.
Consider an alternative fantasy world: commits get into the hands of users within 24 hours. Then users become aware of breakage much more quickly, can notify upstream about it much more quickly, and there’s a chance the breakage can be reverted or easily mitigated.
A real-world example of this is the DeepSubsumption reintroduction. There was about an 18-month lag between the 9.0.1 release that removed deep subsumption and the 9.2.4 release that reintroduced the compatibility flag (and an even longer gap between the commit that removed it and the commit that added the flag). There’s a whole compiler version (the 9.0 series) that some users can never use! And this is perhaps the best example of a case where the costs of GHC breakage were mitigated. Mostly, breakage is never satisfactorily dealt with. The ecosystem must simply cope with it, somehow.
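For readers who didn’t hit this particular breakage, here is a minimal, hypothetical sketch (not taken from this thread) of the kind of program involved: the direct call is rejected by the simplified subsumption rules of GHC 9.0 and 9.2 and must be eta-expanded by hand, unless the DeepSubsumption compatibility flag from 9.2.4 is switched on.

```haskell
{-# LANGUAGE RankNTypes #-}
-- The compatibility flag reintroduced in 9.2.4; without the next line,
-- `broken` below is rejected by GHC 9.0/9.2's simplified subsumption.
{-# LANGUAGE DeepSubsumption #-}

-- Hypothetical example: g expects an argument whose type has a forall
-- to the right of an arrow.
g :: (forall p. p -> forall q. q -> q) -> Int
g k = k 'x' (0 :: Int)

f :: forall a b. a -> b -> b
f _ y = y

-- Accepted under deep subsumption (GHC <= 8.10, or with the flag above);
-- rejected by plain GHC 9.0/9.2.
broken :: Int
broken = g f

-- The manual workaround simplified subsumption demands: eta-expansion.
fixed :: Int
fixed = g (\x y -> f x y)
```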
Where does frequency of release come in? Well, as touched on above, I think that reduced latency would be an important factor in improving quality. I also think it is unlikely that we can reduce latency without also increasing release frequency. A system gets good at doing what it does, and bad at doing what it doesn’t do. If releases continue to happen at a rate of only two per year, then the system will never really have enough opportunity to exercise the parts of itself that need to improve in order to reduce latency. As @int-index says, automation has a vital role to play in this. But something that happens only twice a year can never really be satisfactorily automated. There simply isn’t enough selective pressure to make that automation actually good.
But I am not saying that increasing the frequency of releases is definitely going to improve Haskell ecosystem quality. It might even make it worse. It definitely will make it worse if done in the wrong way. I am not making a proposal of any sort. I am not asking anyone to do any additional work, and there is no need for anyone to feel “chills” about this topic being discussed. Furthermore, I think there is a lot of lower-hanging fruit when it comes to quality. I don’t suggest that release frequency should be our top priority.
I am merely inviting people to consider the notion of GHC release frequency from a different perspective. That perspective is only possible once the notion of release frequency has become disentangled from the notion of breakage per unit time. Maybe nothing will come of this line of thinking, but I think it’s interesting and promising, which is why I’m inviting everyone to think it through with me.
[1] and I personally invest time in this, volunteering with the stability working group to determine ways to reduce breakage and mitigate its costs
[2] in an ideal world – in the real world we would fall short, but we could fall much less short than we currently do, in my assessment
[3] certainly more than 6 months, probably not as many as 12, but I’m not an expert in these matters, so let’s say 9 months as a rough figure