It’s great to see the formation of a “Haskell Stability Working Group”. But I’m surprised that the key user groups in the charter include only developers and researchers, and don’t mention educators. Haskell is taught to tens of thousands of students each year around the world, and this has been vital to the success and growth of the language. Without training new users, there is no future for Haskell! How about adding something like “[C] As a teaching language, to train the next generation of Haskell users” to the purpose of the working group?
My comment was about the group’s charter, rather than the list of members [I’ve updated my original comment to clarify this.] Although it’s worthy of note that ‘Education’ isn’t one of the topic headings in the list of members.
Thanks Graham – you are not alone; others have also pointed out our silence on education and training.
I have clarified our interest in this area by listing Dmitrii as an educator in the membership and charter.
In truth we are very focused on the production side of things, because that is what has to be fixed, but it is also where an incredibly important class of consumers is to be found – those building and maintaining the ecosystem itself.
We do very much care about education and training though – a critical cog in the whole enterprise. You are very welcome to nominate anyone else, including yourself. Members do not have to attend all the calls – just the ones where they feel they have something to say.
However, I do think it would be beneficial to explicitly add something like “[C] As a teaching language, to train the next generation of Haskell users” to the stated purpose of the group.
In particular, the impact on educators (and the producers of educational material) of changes to the language can be very significant. This is particularly true for the large number of educators who are not active Haskell researchers or developers, who may not have the time or inclination to keep up with changes.
Old timers like me will always be keen to teach Haskell, but there are many others teaching Haskell that can easily be put off by too many changes, and we need these folks onside to help multiply the impact of the language.
No one is advocating freezing Haskell, halting all further language development.
Well, maybe not the working group, but I don’t think this reflects all of the community opinions.
For many users, Haskell already has everything they want and also has a lot of things they don’t need. For things Haskell doesn’t have, some users would rather use a specialized language (compare with F*).
One alternative would be to move to a policy of 100% stability, especially of the language itself and its APIs; that is, no breaking changes. Other language ecosystems do this. But the Haskell Foundation believes it would be a mistake to seek 100% stability – a mistake that would kill off the ferment of intellectual change, deep research, and the innovative (and genuinely useful) features that characterise the Haskell ecosystem. Haskell is a thought leader, and must continue to grow and change.
Well, Haskell has not evolved at all since 2010. It is basically frozen and all attempts to move it forward have failed so far. That’s ok with some parts of the community, but not with others. And this is the actual elephant in the room, IMO.
Why can’t we have both stability and progress? Because there’s no usable Haskell standard that would allow you to use even a fraction of the ecosystem. Group A from your charter has basically already lost the game and is keeping up through freezing package versions and using old GHCs.
The only way to achieve both is fixing that:
create a new conservative Haskell report that would either cover large parts of the ecosystem directly, or allow existing libraries to support the standard in a reasonable way (GHC2021 is not that)
have an easy way to tell GHC/cabal etc. that you’re building for a specific standard (this implicitly guarantees that building with a newer GHC against the same standard will always work)
support library authors of prominent packages to make them buildable with the new standard enforced (e.g. instead of wild CPPs around base versions, you’d simply check against which standard you’re building)
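The last point can be sketched in code. In the module below, `__GLASGOW_HASKELL__` is a real macro that GHC always defines; `__HASKELL_STANDARD__` is hypothetical – the kind of macro a standard-aware GHC/cabal could define when you declare which standard you are building against (since it is undefined today, the fallback branch is taken):

```haskell
{-# LANGUAGE CPP #-}
module Main (main) where

-- Today: libraries guard code with compiler-version checks like this.
#if __GLASGOW_HASKELL__ >= 900
ghcNote :: String
ghcNote = "built with GHC >= 9.0"
#else
ghcNote :: String
ghcNote = "built with an older GHC"
#endif

-- With a declared standard, the same library could instead ask which
-- *language standard* it is being built against. __HASKELL_STANDARD__
-- is hypothetical; an undefined macro evaluates to 0 here, so today
-- this always takes the fallback branch.
#if __HASKELL_STANDARD__ >= 2020
stdNote :: String
stdNote = "targeting Haskell 2020 or later"
#else
stdNote :: String
stdNote = "no standard declared; falling back to version checks"
#endif

main :: IO ()
main = putStrLn (ghcNote ++ "; " ++ stdNote)
```

The point of the sketch: the first check ties the library to a particular compiler, while the second would tie it only to a standard, which is what lets a newer GHC keep building old code unchanged.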
What does that achieve:
it allows two branches in both GHC and the ecosystem without actually forking GHC (maybe it also reduces the number of “running” GHC versions)
People who are not interested in new features can feel secure that their code compiles even with a new GHC version (modulo API changes to base/core libraries)
teaching material etc. can easily refer to this version of the standard and don’t need to worry that it’s useless or outdated
we finally end up with a new, usable standard
What are the costs:
additional toll on GHC maintenance: although I believe it may very well reduce the number of supported major GHC versions drastically, it will likely be a lot of work to maintain the boundary between Haskell-GHC and Haskell-20XX
additional toll on library maintenance: if library authors are not interested in making their libraries buildable with a specific standard enforced, the whole idea falls flat
some libraries may never support a specific standard, because they use advanced/unsafe features
GHC versions are basically our way today to express with what language we are working. And that is insufficient.
Which of these groups of users would you put yourself in? And what would ‘Stability’ look like from your experience? Perhaps you could encourage those from other groupings to volunteer their views (please). It’s difficult to form an opinion from anecdotal commentary.
That’s great to hear. They’re the easiest to provide stability for: they should stick with their current release.
The formal Report hasn’t changed. The de facto standard (that is, the ‘central body’ of GHC excluding the bleeding edges) has evolved steadily. Is that an “elephant in the room”? I don’t see it as an “enormous topic” or that “no one wants to discuss it”. Haskell is different. The lack of a formal written standard doesn’t seem to be stopping “thousands of developers to get their day job done”.
I disagree. GHC’s LANGUAGE controls make it easy to fix the subset of features that supports how you want to program – that is, what language you are working with. Those features are stable from release to release, once they’ve settled behind the bleeding edge. If you need a formalised report-alike for your Haskell shop, or perhaps for training newbies, just stitch together the feature descriptions.
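For example, a shop can already pin its dialect in the build configuration rather than per-file pragmas. This is a minimal sketch: `default-language` and `default-extensions` are real Cabal fields, while the package contents are made up:

```cabal
-- hypothetical library stanza; the field names are real Cabal syntax
library
  exposed-modules:    Acme.Core
  build-depends:      base >=4.14 && <4.19
  default-language:   Haskell2010
  default-extensions: ScopedTypeVariables
                      LambdaCase
                      DerivingStrategies
```

Note that this pins only the surface dialect every module is compiled with; it does not, by itself, shield a codebase from changes in type-checker behaviour or in library APIs.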
I think that Haskell is unsuitable for formal verification as well as low-level security related things like cryptography. There are languages that solve these things better and from the ground up, without retrofitting concepts that barely fit into the main language/ecosystem.
I still find some of the type-level programming interesting, but 90% of my code (professional and personal) doesn’t need it.
I think I explained that sufficiently wrt GHC: when compiling against a certain standard, the GHC version you use shouldn’t matter. That is stability.
Sorry, I’m not running surveys at the moment.
That is not sufficient: old GHC versions such as 8.6.5 lack bugfixes, support for certain platforms (e.g. darwin aarch64), the new Windows IO manager, the new GC, etc. etc.
As I explained in great detail, a compiler switch for building against a specific version of the standard (C/C++ compilers already have this) could fix that. You could upgrade to a newer GHC without riding the constant churn of semantics in plugins, the language, etc., while still getting the benefits of bugfixes, improved platform support and so on.
GHC being the “de facto standard” is exactly the problem, because it doesn’t evolve as gracefully as a standard. It evolves erratically.
Absolutely not. GHC plugins are constantly changing – check the release notes for 9.0.1, which contain several breaking changes in plugins and the language:
simplified subsumption causes some programs to not compile anymore (every now and then this edge case comes up on IRC)
breaking changes in DerivingVia
breaking changes in GADTs
breaking changes in NegativeLiterals
Those features are old, yet not stable. Updating a 2 million LOC codebase from 8.10.7 to 9.0.1 could already mean immense work, depending on how many of these cases you hit.
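The simplified-subsumption point is easy to demonstrate. This is the canonical shape of the breakage from the GHC 9.0 release notes; the function names here are made up:

```haskell
{-# LANGUAGE RankNTypes #-}
module SubsumptionDemo where

f :: forall a b. a -> b -> b
f _ y = y

g :: (forall p. p -> forall q. q -> q) -> Int
g _ = 0

-- Before GHC 9.0, deep subsumption accepted passing f directly:
--
--   broken :: Int
--   broken = g f        -- rejected by GHC >= 9.0
--
-- Under simplified subsumption the argument must be eta-expanded by hand:
fixed :: Int
fixed = g (\p -> f p)
```

Code like `g f` typechecked for years, then stopped compiling on upgrade – which is exactly the kind of churn a pinned standard is meant to prevent.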
This is orthogonal to training newbies. Stitching together random plugins doesn’t constitute a stable standard, nor a stable compiler API.
Again: the idea is to update GHC without any cost. This cannot be achieved, unless there’s a well-defined standard of the language that GHC can adhere to, when asked.
This is a really good point. Thank you for spelling this out, I had not really thought of it this way, and I think that separation (between the compiler and the standard) is indeed part of the puzzle for providing a mechanism for more graceful migrations (it covers changes to the standard).
EDIT: playing devil’s advocate… it seems GHC is already so “coupled” in those functions that separating them may end up just shuffling abstractions around (i.e. more would be needed than just moving functionality). And this doesn’t address the other pieces which see breaking changes.
Yes I was pretty sure that’s where you were going.
“These people” is one person (or one organisation); and by their own admission: “we are a special case in that we modify GHC. … Some of this is self-inflicted”.
Somebody who modifies a software artefact from an outside provider is of course asking for trouble every time that provider issues a new release. So not representative. And my reason for prodding you for evidence is I think you’ve cherry-picked some ‘worst case’ examples from various sources and blown them out of proportion. My saying “anecdotal commentary” is code for: you’re making stuff up.
Nevertheless, one person-year to upgrade from 8.4 to 8.8 sounds to me like negligible effort. I’m used to upgrades of complex software taking ten person-years of tech staff and 18 months of planning + execution, not to mention 20~30 person-years for end-user training, changes to procedures and documentation, etc.
The sky is not falling. I see you evaded my direct question “Which of these groups of users would you put yourself in?”.
for most people, “GHC” was synonymous with “Haskell”.
(and yes: I know people can use GHC’s -XHaskell98 and -XHaskell2010 options - our continual presence here and on other forums would indicate doing that isn’t a viable solution).
So just having “fixed-standard language” command-line options for GHC isn’t enough:
presumably there would also have to be Haskell '98, Haskell 2010, Haskell 2020, etc. standard-compatible versions of most of the major packages in Hackage, and elsewhere;
separate Hackage '98, Hackage 2010, Hackage [next version] as well as Cabal, Stack, ad infinitum i.e. a self-contained suite of packages and tools [and documentation] for each version of Haskell.
I for one like the idea of a new fixed language standard - it’s a proven solution to the documentation problem: that’s why Haskell '98 exists. But the existence of Hackage alone clearly shows that the Haskell ecosystem in 2022 is so much larger than it was back in 1998 - maybe that was one big reason why Haskell 2020 failed, and is certainly a problem for any future attempts at setting a new fixed standard, for both the language and libraries.
As for trying to define a new fixed standard just for the language, all the language extensions now in use across Haskell libraries far and wide would appear to leave future standardisers with a dilemma:
keep the language relatively compact, and most likely break every library in existence;
support as many libraries as possible, and most likely end up with a big, bloated language.
If you have any ideas for resolving this predicament, please contact the Haskell Foundation today!
We have enough threads (2!) on ghc-base-splitting at the moment, so I don’t want to derail this one, but yes – at some point, once the working group meets and we’ve experimented with that more (not ready yet!), I would love to discuss that effort with this working group.
IMO it seems inarguable that supporting multiple versions of things concurrently, whether it’s language or libraries, has clear benefits: more stability, experimenters/researchers are just as happy, etc. The question is about the costs of things like -std= for GCC/Clang, long transition periods for breaking changes in Rustc, etc.
So the fundamental questions are:
How wide a status quo – brave new world window can we afford?
When the window is maxed out, which endpoint compromises more?