Indeed! In the Twitter thread I proposed using TH for this: one can almost imagine the compiler finding migration TH to apply to code to update it, and a standalone runner that could apply TH edits to source on disk.
One example I know of, and greatly benefited from, was with Terraform. The v0.11 of Terraform used HCLv1, and v0.12 included significant changes to the syntax with the adoption of HCLv2.
As part of the migration, the Terraform team provided us with a command to automate updating v0.11 code to the v0.12 syntax (Command: 0.12upgrade - Terraform by HashiCorp).
To do the upgrade by hand would have been atrocious, but the command made it manageable.
That said, I don’t think such a tool would significantly change the experience with Haskell. Even if we had the tooling in both stack and cabal, that doesn’t address the way our ecosystem is organized and structured; it doesn’t address the disconnect between policy and vision at GHC HQ, nor how we use the PVP, nor the lack of a “standard library” with strong guarantees.
This situation isn’t as simple as we might like to believe, and it’s just a symptom of the deeply entrenched problems in the Haskell community.
I don’t understand all the maintainer panic. It’s not like (/=) will go away at 3am tomorrow.
We may have years ahead to get such warts removed if we need it. Stick a forward-compatible change in one of your releases during 2022 and move on. By 2023 we’ll have a language with one less thing to worry about. Ditto for other deprecations.
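For the (/=) case specifically, the forward-compatible change can be as small as making sure your Eq instances define only (==). A minimal sketch, using a hypothetical Colour type standing in for your own code:

```haskell
-- Hypothetical type standing in for one of your own.
data Colour = Red | Green | Blue

-- Defining only (==) works with today's Eq, where (/=) has a default
-- definition, and keeps compiling if (/=) later stops being a class method.
instance Eq Colour where
  Red   == Red   = True
  Green == Green = True
  Blue  == Blue  = True
  _     == _     = False

main :: IO ()
main = print (Red /= Blue)  -- (/=) remains usable at call sites either way
```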
Haskell is touted as “easy to refactor” at every corner. And it is not far from the truth. Let’s use the powers we preach.
(I do maintain a bunch of projects since GHC 7.0 and I’m fine with it.)
I can relate to all of this; it matches my experience. Of course anything can be fixed and evolved with a sufficient amount of elbow grease, but I’m afraid there was nothing “technical” about this whole uproar.
In fact I started this thread to understand those more social and psychological aspects of resistance to change, or perceived lack of “control” over a shared common resource [1] such as base.
I find it extremely indicative that very experienced users of the language have diametrically opposite views on this situation (see a few comments above by @sclv and @chrisdone: “it’s a small change, nbd” / “you are f*ing up, people are rage-quitting”).
Of course you can’t control what people say on social media but a few loud and authoritative voices sent shockwaves through the Haskell echo chamber, which was demoralizing. I was particularly disappointed with L. Augustsson, G. Hutton and E. Meijer, founding figures in Haskell in their own right, who didn’t involve themselves with the discussion process but only showed up on twitter to complain and shame in a very non-constructive way.
It’s high time we stop talking past each other since we all draw at the same well at the end of the day.
[1] E. Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action, Cambridge University Press, 1990.
Also, the GitHub discussion on guidelines for breaking changes shared by @tonyday567 is a good thing. We should all provide perspective and actionable insight to the CLC over there. They are doing a hell of a job keeping the ship steady.
Time for a little historical context
3.7 Haskell and Haskell 98
The goal of using Haskell for research demands evolution, while using the language for teaching and applications requires stability. At the beginning, the emphasis was firmly on evolution. The preface of every version of the Haskell Report states: “The committee hopes that Haskell can serve as a basis for future research in language design. We hope that extensions or variants of the language may appear, incorporating experimental features.”
However, as Haskell started to become popular, we started to get complaints about changes in the language, and questions about what our plans were. “I want to write a book about Haskell, but I can’t do that if the language keeps changing” is a typical, and fully justified, example.
Many of the criticisms leveled at this change seem to belong in this category: “I can’t teach/use Haskell with confidence if it’s constantly changing …” - however:
We made no attempt to discourage variants of Haskell other than Haskell 98; on the contrary, we explicitly encouraged the further development of the language. The nomenclature encourages the idea that “Haskell 98” is a stable variant of the language, while its free-spirited children are free to term themselves “Haskell.”
So this observation - that (/=) was a method of Eq on 2021-11-11, then just another function on 2021-11-12 - means Haskell [sans version] is proceeding generally as expected:
the Haskell community […] usually not only absorbs language changes but positively welcomes them: it’s like throwing red meat to hyenas.
…but judging by a few of the comments here, some of those hyenas are now begging for Alka-Seltzer(R). There was an attempt to bring some order to this chaos by providing another stable version of the language:
https://mail.haskell.org/pipermail/haskell-prime/2016-April/004050.html
which eventually went nowhere:
https://reasonablypolymorphic.com/blog/haskell202x
…and here we all are today, trying in our own ways to find the balance between evolution and stability. Here’s an idea: let’s call the language & version supported by e.g. GHC v. 9.2.2 “Haskell 1.9.2.2”, based on the following observation:
- early versions of Haskell: v. 1.0 to 1.4
- Haskell 98: v. 1.5
- Haskell 98 with addenda: v. 1.52
- Haskell 2010: v 1.6
- Haskell 2010 with GHC-isms: v. 1.7.0.1
…and so forth (I’m conveniently ignoring the advent of “Dependent Haskell”: maybe a whole new language, like how Raku appeared? :-). If it’s possible, doing this officially could help to ease the tension between stability and evolution:
- Academics (and perhaps builders of alternate Haskell implementations) can choose the specific version of Haskell that interests them;
- The removers of “warts and moles” from Haskell don’t have to wait [indefinitely?] for e.g. “Haskell 2024” to have those changes accepted: they just need to increment the current Haskell version and update the Haskell Report accordingly, along with any other pertinent documentation.
One other possible benefit:
- Updating said documentation can also be made a requirement for adding new language features.
Of course, you can still add or remove whatever features you like in your own private copy of GHC, but your changes will only be given serious consideration if you’ve documented them well (e.g. as a draft revision of the official documentation). Not only does this focus the attention of those proposing potentially-breaking changes, it could allow more of them to be aggregated together in each new Haskell version - occasional large changes instead of frequent small ones.
Alternately, just adopt the useful parts of the approach the Rust language relies on to strike its balance between stability and evolution, and adapt them accordingly.
But all this talk of decisions in hindsight regarding head and tail being in the Haskell prelude, or (/=) being a method of Eq, may be missing the point - in hindsight (again!), perhaps the better decision was for the original Haskell committee to design two non-strict functional languages instead of one:
- a “concept language” intended for evolution, which charges ahead with new concepts and ideas;
- a “production language” intended for stability, which follows behind cautiously at a safe distance, picking up only the most enduring and useful features.
Was this a setup for one of those “Haskellers would literally fork the compiler instead of going to therapy” jokes?
I like “we all draw at the same well”. We have different priorities and different goals. We have different cost models: what is a minor inconvenience for one person is a huge pain for another. And yet we share a common language and a common ecosystem, one that we all care about and all want to succeed.
If we yell at each other, we aren’t going to make that commons better. We have a better chance if we (somewhat dispassionately) share our varying priorities and cost models. It’s not a silver bullet – some goals are truly incompatible – but it’s the best shot we have.
Is the book you cite a good one? This one, right?
What I’m indeed surprised about is that this discussion happened wrt the No /= in Eq proposal and not in the Dependent Haskell one.
From how I read the discussion, people want clarity, so they can manage their expectations. That may (for some) mean they will leave the community. Others will set different priorities (e.g. won’t update their libraries all the time). Some may even consider a fork of GHC, who knows.
I think the important point is not to try to keep everyone in the same boat, but to be up-front about where things are heading and to allow people to do their own thing, without dramatic incidents.
Even if part of the community decides to go a different route in some aspects… that doesn’t mean there can’t be useful synergy. And this clarity may in fact reduce friction, even if it contributes a little bit to increased fragmentation.
As an example: if everyone understood that base’s primary concern is backwards compat, many of its users would stop trying to push large changes into it, and alternative preludes would maybe become even more common. That’s fine. That’s open-source dynamics. But there must be a path forward, so people don’t waste their time due to confusion in priorities.
I’m currently reading it and find that it fits our current situation perfectly, even though it was written before the notion of open-source software existed (1990). It is very well written and illuminating, as it covers examples and models of collective governance from a number of disciplines. Highly recommended.
Now that you’ve mentioned it:
2.6 Was Haskell a joke?
The first edition of the Haskell Report was published on April 1, 1990.
Alright, would an updated Haskell standard do? After all:
- lots of decisions are involved, and
- some level of “mutual agreement” is achieved.
The CLC could then use both of these to guide its decisions going forward.
If we consider the whole paragraph from Being lazy with class, sec. 3.7:
The fact that Haskell has, thus far, managed the tension between these two strands of development is perhaps due to an accidental virtue: Haskell has not become too successful. The trouble with runaway success, such as that of Java, is that you get too many users, and the language becomes bogged down in standards, user groups, and legacy issues. In contrast, the Haskell community is small enough, and agile enough, that it usually not only absorbs language changes but positively welcomes them: it’s like throwing red meat to hyenas.
My interpretation of the zeitgeist is that we are facing those growing pains. It’s not that the “hyenas want alka-seltzer” (good one btw), but simply that times have changed, and new animals roam the savannah.
Haskell is definitely not as big as Java, but I’m guessing it has a much larger (and more diverse) user base than in 2007 (when Being lazy with class was published), and indeed we are facing complex social and political dynamics, and hard economic calls to make.
@atravers In 2021 we are “stuck” with a single Haskell compiler and core-libraries environment; Hugs is no longer in development and the various research compilers (UHC, etc.) cannot keep up with the pace at which new research is merged into GHC. On the industrial side, valiant efforts like Eta have been bogged down by the same problem; forking means duplication of much effort.
The most vocal negative feedback on the /= proposal concerns time (and therefore money): “Why should updating dependencies break my code?”
I completely agree with @hasufell: GHC and the core libraries must make (in-)stability expectations as explicit (and motivated) as possible going forward.
Apart from being a textbook example of bikeshedding, the two are different because hopefully (and as far as I can pick up from the current writeups) dependent types will be opt-in instead of a forced change, whatever that means.
Yes, I agree too. My personal view (I am not speaking for the HF here) is that
- A distinctive and valuable feature of Haskell is that it does grow and change, even at the cost of some backward-incompatibility.
- We should strive to minimise the costs of such changes by strategies such as
- Providing ample warning
- Giving opportunity for many voices to be heard; perhaps there are unanticipated costs that might change the judgement balance.
- Offering mitigation strategies and workarounds; perhaps introduce changes in several steps (e.g. warnings first, before the change itself - see the sketch after this list)
- Avoiding gratuitous back-incompat (e.g. “change for change’s sake”)
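To make the “warnings first” step concrete, here is a minimal sketch, assuming a hypothetical library module (none of these names come from base): one release keeps the old binding but attaches a DEPRECATED pragma, and only a later release removes it.

```haskell
-- Hypothetical module illustrating a staged (warnings-first) removal.
module Data.Legacy (oldName, newName) where

-- Release N: oldName still works, but every use site gets a compile-time
-- warning, giving downstream code ample notice before release N+1 drops it.
{-# DEPRECATED oldName "Use newName instead; oldName will be removed in the next major release" #-}
oldName :: Int -> Int
oldName = newName

newName :: Int -> Int
newName = (+ 1)
```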
It’s not a black-and-white thing, obviously. But most production languages err strongly towards 100% back-compat, and Haskell is (historically at least) different to that. For me personally, a principle is that I’d like Haskell to “make sense” when viewed as a whole with no history; rather than to enshrine in perpetuity a series of historically-driven mutually-inconsistent choices.
It might help if the HF (in consultation with the community) framed some general principles to guide the GHC steering committee, CLC, and others, and to help avoid the unpleasant surprises that arise from mis-matched expectations.
Because that’s the price of “floating” Haskell (no fixed standard).
These days, the closest a developer can get to a fixed standard of the language is to keep and tend to their own private copy (or copies!) of GHC for as long as possible: one with the necessary features for their own programs - a “mini-fork” of sorts. That makes for a lot of duplicated work - much of which can be avoided with a fixed standard.
But fixed standards also have their price, and if the fate of Haskell 2020 is any guide then the Haskell community now considers that too expensive. So Haskell is left “floating” and now subject to change, depending on how well the argument is made for said change, leaving us to deal with ongoing breakage or ongoing (and duplicated) maintenance of mini-forks - or both D-:
GHC and the core libraries must make (in-)stability expectations as explicit (and motivated) as possible going forward.
How would this help? Some users would still complain; it just wouldn’t be when their old code no longer compiles - they’d just petition for the breaking change not to happen. That leaves two options:
- try to write a tool which can update old code, or
- endure the complaints e.g. about Haskell “being a toy”.
Haskell-to-Haskell translators? That seems almost as complicated as writing an all-new compiler! Maybe therapy for the disappointed is the simplest option…
- A distinctive and valuable feature of Haskell is that it does grow and change, even at the cost of some backward-incompatibility.
…case in point: the monomorphism restriction, now switched off by default in GHCi - presumably no-one wants that behaviour back!
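For anyone who hasn’t bumped into it, a minimal sketch of the restriction in action (plus and main are just illustrative names):

```haskell
{-# LANGUAGE NoMonomorphismRestriction #-}

-- With the monomorphism restriction on (still the default inside modules),
-- the signature-less binding below is forced to a single numeric type, and
-- using it at both Int and Double is a type error. With the pragma above,
-- plus keeps its general type: Num a => a -> a -> a.
plus = (+)

main :: IO ()
main = do
  print (plus (1 :: Int) 2)         -- used at Int
  print (plus (1.5 :: Double) 2.5)  -- and at Double
```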
It might help if the HF (in consultation with the community) framed some general principles to guide the GHC steering committee, CLC, and others, and to help avoid the unpleasant surprises that arise from mis-matched expectations.
…in the absence of an encompassing language standard to refer to, this could be a workable alternative, provided “discussions” don’t continue indefinitely. At some point, decisions must be taken - no level of “guidance” (or bureaucracy) changes this basic fact.
I’ve written a thing, looking forward to all feedback:
I want to talk a bit about what I think about this topic.
So, what do I think about this topic?
I think this topic should be considered in social, historical and even teleological context. It is not merely that some people want one thing (say, nice core libraries) and others want another (say, timeless learning materials). There is a reason why people want this or that — this reason is what matters. And people can be wrong about what is best for them.
So, what do people want?
- There is a desire to see big changes to the core libraries, or even a whole new set of core libraries. See this big thread.
- There is a desire to have a reliable tool for building big systems. See this comment for example.
The problem is that it is impossible to have both.
The standard solution to this problem is to clarify the purpose. This is mentioned in this comment. But what if there are several purposes? There is a proposal to split the ecosystem.
How should we evaluate these possibilities?
- There is a local way. «This is good for me, therefore I am going to support it.» We ask everyone to evaluate a local optimum and then vote to either pick one or take a weighted average. This is what I think has been happening so far: everyone pulling the blanket their way.
- There is a global way. «Amplify Haskell’s impact on humanity.» This is the mission statement of the Haskell Foundation. But the Haskell Foundation does not have the authority to decide upon socially divisive problems. So, we are still stuck at the average between local optima.
Can we pursue the global way as a community?
It seems that we should account for several aspects of the problem. My picture is like this:
- There is a space-time continuum. Space is the landscape of society, time is cultural change. (To be concrete, take the relative adoption of Haskell in various markets as dimensions and vary it over time.)
- Haskell is drawing a curve in this continuum. The aim of this curve is either a clearly stated purpose, or an unwritten average determined by historical accident.
- There is no option for the curve to have no aim. Since it is mostly smooth, it has a tangent vector along most of its length.
This can also be said in terms of enterprise management. There are customer segments and Haskell can strive to offer value to some but not all of the possible segments. The outcome is that either Haskell has a big market share overall (many opportunities for paid employment, many well-supported libraries) or a small market share overall (few opportunities for paid employment, few well-supported libraries).
So, goals we pursue → vector of cultural change → overall market share → good things.
The current situation is that Haskell has about 3% market share and no growth to speak of.
How can these numbers be improved?
- This is not a question of technical expertise, but of business administration.
- This is not a question of keeping the work of maintainers easy. Maybe we can instead make it harder but better paid. Then more people will be able to afford to do it.
- Haskell Foundation is not going to make hard decisions for us. There is no management for Haskell.
The only way forward I can see is for the community to organize itself. And not around technical expertise, but around social, enterprise and strategic expertise. Splitting hairs over whether /= should be a method is a waste of everyone’s time. There are bigger fish to fry.
This of course involves redistribution of power. Currently those most able to split hairs hold all the power. I do not see this working. We need someone to look outside the Haskell bubble, someone to crunch the numbers (like say these dream metrics), someone to talk about big changes (like say splitting the ecosystem — see above).
The first step is to re-frame the discourse to account for social, historical and even teleological context. That is what I think, anyway.
Thanks!
One of the goals of creating the HF is to have a body that can make hard decisions. We have no power of enforcement, but we do hope to point ourselves to hard problems and make hard decisions, informed by the community and by data collection.
As one of the chief hair-splitters: I agree. It’s one of the reasons I support the HF and am thrilled to work with people paid to make business decisions around the future of Haskell.