The evolution of GHC

Miranda download page. (Beware the caveats about its vintage, and note that its Wikipedia page is mostly in the past tense. I think @atravers had tongue firmly in cheek.)

For completeness, see this to download Hugs. What Hugs provides is just as much a non-strict functional language that isn’t GHC.

  • For Miranda(R) documentation, see the home page - it has links to manuals, textbooks and papers.

  • Both Miranda(R) and Hugs are implemented in C.

…I was inspired: I wasn’t expecting Hugs to be suggested as an alternative (I too have been a regular user of Hugs over the years). However, to quote the Hugs homepage:

Note: Hugs is no longer in development. The content on these pages is provided as a historical reference and as a resource for those who are still interested in experimenting with or otherwise exploring the system.

That is what I meant by GHC being the only implementation to successfully transition to Haskell 2010 - presumably the Wikipedia entry for Hugs also makes extensive use of past tense. Having said that, who would have expected a 64-bit release for Miranda(R) after all these years - maybe Hugs can be reinvigorated as well…

WinHugs happily compiles in MS Visual Studio – in 64-bit if you insist. (And to be precise, it’s implemented in C++ – pre-1990s C++ AFAICT.) And you can tweak its parser.yacc and type.checker, and teach it to be cleverer with instances/overlaps/FunDeps.

When you say “losing people to other projects” do you mean losing developers of GHC or users of GHC (i.e. users of Haskell)? In either case could you provide some evidence that this is actually happening? It’s completely contrary to what I’m seeing, as I elucidated above.

Lots of the “old gang” that started all sorts of Haskell projects in the early days are gone, true. Some of those left a trail of abandoned packages. The days of early adopters are over and with that maybe some of the exciting flair and heated ML discussions. But overall the community is much bigger now, there’s really no question about that.

Some of what I read here really seems more like nostalgia about those old days. People finally got jobs, researchers moved on to new topics and teachers have their powerpoints already sorted.

Wrt GHC: I totally get the point and I want to highlight this quote

…which I totally agree with.

But at the same time I feel @AntC2 wildly (maybe accidentally) misrepresented the work of the current GHC maintainers. They’re not working on new fancy language features 24/7. If you hang out in the development channels and read the GHC activities reports, you’ll see they’re working on much more, including bugfixing, performance improvements, new architecture support, a new GC (did you know?), etc. etc.

Yes, there’s very little pushback on radical language feature proposals (including those that are not even complete, like dependent types)… but this can maybe be attributed to some form of pragmatism about keeping the few compiler/language contributors that we have engaged.

I think there are a couple of ideas to discuss, e.g.:

  1. create GHC LTS releases and don’t spread across too many branches… I feel there are too many new major GHC versions. But whether this really reduces maintenance load or not… I don’t know.
  2. fork GHC-8.10.7 and simply freeze language features. I guess for most industrial users doing this is still more work than upgrading to a new major GHC version every couple of years, so there would need to be more drive in this direction.

Again, I do not think the problem is the rate of change.

The problem is with the process used to roll out that change.

This problem is, unfortunately, complicated, and bigger and more nuanced than any one person can actively keep in their mind all at once. Another part of the problem is that there’s a disconnect between different groups of people involved in creating the experience.

Solving this problem requires a fair number of people communicating and working together to find the improvements to our process that we’re missing.

Fortunately, we can do this iteratively, and we can start this now. It’s not too late, though it’ll get harder the more frustrated and disconnected we are. So it requires patience and consistent attention.

I think the hard part is getting the right people together in a group and talking about the problems and possible solutions. But I still think we should do it.

EDIT: this is super exciting! Haskell Foundation Stability Working Group


I might be beginning to sound like a stuck record here, but we are not becoming more frustrated and disconnected. We are becoming less so. Apologies if I am mistaken @ketzacoatl, but I don’t think you were around ten years ago when there were huge amounts of fractious argument and strife. From my point of view, relative to ten years ago, the Haskell community is wonderfully placid and optimistic! Granted, it may look different to someone who has entered the community more recently.

And granted we would like to lower the level of frustration and increase the amount of connection regardless. I don’t disagree with the overall tone in this thread that we can do better on many axes! But I think we will be more effective if we start with an accurate assessment of the status quo. We are (relatively speaking) a harmonious community, energised and optimistic. In some situations panic is justified, in some situations time is close to running out, and the people who find themselves in those situations must respond accordingly.

I think in our situation it will be most effective to start from an assumption that the community has good forward momentum and good morale, and work out how to harness that for greater productivity. If we start from the assumption that we are frustrated and disconnected then that sets a negative tone which will pervade community activities and actually be counterproductive, in my opinion.

Now, I’m willing to be wrong. If there’s hard evidence that my assessment is wrong then I’d like to be corrected so I can reset to a more realistic position! If you have some, please share.


I agree that we have an opportunity to turn the tide, but I don’t see a resolution for the disagreements and rifts from ten+ years ago.

Those heated arguments led to an ecosystem of new packages, and eventually to Stackage and Stack. Those folks stopped short of forking GHC, but I would imagine it was considered more than once (the only things stopping them being the amount of work and the concern about the impact of another fracture).

Those rifts still exist in our ecosystem, and I don’t see how those are being resolved. In fact, I think we’re at the point where some people are choosing to move on and focus their attention elsewhere.

EDIT:

I think we’re not hearing some of this b/c those people have already said it many times, and eventually got to the point where it no longer felt worthwhile to invest more of their energy into pushing something that would not move, and so they have moved on. I would imagine we have the opportunity to win back at least some of these contributors, but that would depend on how we as a community respond.

Totally agreed vis-a-vis the facts and observations. I just think it’s more helpful to interpret our community as finding itself presented with a number of tantalising opportunities for improvement rather than on the cusp of disaster requiring emergency action to avert.

I agree, though I am also doing my best to be as realistic and direct on the matter as I can. I don’t think it helps to ignore or downplay the potential loss that’s in play, the cards on the table.

I agree there’s a big difference between where we could be with the right effort and where we could quite likely be without it.


Yeah it seems the stack-cabal-wars simmered down, but then many of the stack people also basically left for Rust. First part great, second part not so great!

You’re asking the wrong person for said evidence:

Hrm:

…maybe you’re right - the problem may not be the rate of change. But you also say the current process isn’t working, and so far:

  • I can’t think of a solution at the moment;
  • I haven’t really seen a solution here - also at the moment.

Until that changes:

  • do we just keep on going at the current pace of $ACTIVITY with that deficient process, and just keep adding to the problem;
  • or do we slow it down in an attempt to mitigate the problem?

Of course, that could just be a lack of imagination on my part - if you can see another option, I for one would be very pleased to see it…

IMHO, we use these pressures to figure out how to improve the change process. E.g. we do it now.

Also, it’s not that solutions do not exist. There are solutions available, but they are locked up as pieces in people’s heads/experiences. There needs to be enough communication and thought across a group of interested people, and then the possible solutions will be realized, enumerated, debated, and, if we have enough fortitude and patience, decided and implemented.

I wasn’t aware I’d made any representations about the work of the current GHC maintainers. All I said was that what’s getting delivered and planned makes the language less appealing for me. Fancier GC is not so exciting that I’m going to upgrade to 9.2.

This suggestion I don’t understand: old releases of GHC (including their base libs) remain available for decades. Nobody needs to actively fork/freeze anything. Perhaps you mean freezing all the other libraries/repositories? That’s not for GHC to do.
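
To be concrete - a minimal sketch, assuming ghcup is installed and the 8.10.7 bindist is still served for your platform - staying on an old release is just a matter of pinning it:

  # install and select the old release system-wide
  ghcup install ghc 8.10.7
  ghcup set ghc 8.10.7

…or pin it for a single project in its cabal.project:

  -- use the pinned compiler for this project only
  with-compiler: ghc-8.10.7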

Isn’t this what has been happening for the last few years, if not the decade? Clearly it hasn’t been working:

…that pressure is instead driving people away.


…because “this time it will work - it just has to!” Not really: failure is always an option.

Are there any other suggestions?


Right. Because software does not have bugs.


I’m not sure I follow the thought process here, but I’m not sure my comprehension matters much.

To clarify my comment about using the pressure to figure out next steps: this was in response to your suggestion that we instead slow down change.

I’m saying that you cannot hold back the tide, and that asking the community to slow down the rate of change has not worked, nor been acceptable in the past, nor do I think it’ll work better now. I just don’t see it as a viable option, and I wouldn’t get behind it.

In my funny little world, when you have a broken or inefficient workflow, you don’t ask the contributors to stop their work, you find ways to fix and improve the workflow. To do that, you talk to the people doing the work, and the people beyond them who are affected by the particulars of the workflow, and you find changes that everyone agrees are improvements.


Let’s change from workflow to traffic flow: you’re in charge of a multi-lane highway and strange bumps have started to appear. Until you have a solution:

  • do you assume everything is fine, keep all lanes open and leave it to drivers to dodge those bumps;
  • or are you cautious and close down some lanes, including the ones with bumps?

…not stop their work, just slow it down while you’re finding effective ways to fix and improve the workflow. It just makes no sense to keep the current rate of $ACTIVITY contributions when, according to you, the workflow is problematic.

According to this:

…expecting the “rising pressure” to now somehow result in solutions - in this case - also seems less than viable. But if you want to prove me wrong, go for it!