The evolution of GHC

I agree to some extent that this thread is a little wishy-washy, but I will contribute my thoughts anyway as a data point… I just found my first Haskell job, after a job search that I never intended to be Haskell-only. By the end of the search I was turning down some FP and even some Haskell roles because there were more options than I could hope to follow up on. And this is with the constraint of being on the US west coast, specifically the Bay Area, which is probably one of the worst places to find junior non-research Haskell.

Also, I note here that the problem that really needs addressing is not that changes should happen more or less often, but that they must (with heavy emphasis) be automated. Haskell provides a beautiful guarantee - referential transparency - that makes refactors vastly more powerful than in other languages. I think that should be the primary focus: automated migration tools and the like. I know it’s a lofty goal… but imagine that Haskell were the language where “when the community decides a mistake was made, it is automatically fixed”. Right now, Haskell is “when the community decides a mistake was made, it is fixed, and everyone suffers that fix”. If we could market ourselves as “aggressively better” and “improvement for free” - that’s an incredibly powerful message. This is the difference between “Monad of no return cost me 5 hours and continuous dependency hell” and “Monad of no return made my code better and I loved it”.
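For concreteness, here is a minimal sketch of the kind of mechanical rewrite such a tool could perform for the Monad of no return change (the Identity type here is my own toy example, not code from the proposal):

```haskell
module Example where

-- A toy type to illustrate the "Monad of no return" migration.
newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
  fmap f (Identity a) = Identity (f a)

instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity a = Identity (f a)

-- Before the migration, instances often defined 'return' explicitly:
--
--   instance Monad Identity where
--     return = Identity
--     Identity a >>= f = f a
--
-- A tool can delete the 'return' definition outright: 'return' defaults
-- to 'pure', and referential transparency guarantees the two programs
-- are equivalent.
instance Monad Identity where
  Identity a >>= f = f a
```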

I also don’t think Haskell’s message is “correctness” anymore, because of the less-pleasant memory situation and the lack of dependent/refinement/termination types. Unfortunately I fear that Rust and Agda will take over correctness in their respective domains (systems programming and maximal verification). But Haskell still has incredible potential as a general-purpose language, and with dependent types it could be the language for hyper-maintainable software targeted at difficult problems with high assurance requirements.

Thanks for bringing this chat up - as much as we are all just spitballing here, it has provoked me to reinvestigate some plans I had for this “global refactoring” tool I mention… So at least it provoked some volunteer hours from me.

4 Likes

Interesting perspective! I want to ask: how do you keep backwards compatibility with the auto-migration tool?

Well, this is part of the issue: we don’t have an expression-based CPP replacement. At one point I cooked up a proposal for a replacement, and such a thing has also been proposed by Matthew Pickering and others. The topic has at least been discussed in the literature - though I can’t comment on the extent of that discussion, or whether there is any impetus to get things moving.

This video links to some papers within the presentation. I will say that I don’t think we need to go that deep - we can get away with a better CPP without full language support. We can probably also get away with plain CPP for the basic cases, and use variable assignments/type declarations for the complex ones.
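To make the status quo concrete, here is a minimal sketch of the usual token-level CPP shim that an expression-based replacement would have to subsume (MyLog is a made-up type; MIN_VERSION_base is the real macro that Cabal generates):

```haskell
{-# LANGUAGE CPP #-}
module Compat where

newtype MyLog = MyLog [String]

#if MIN_VERSION_base(4,11,0)
-- base >= 4.11: Semigroup is a superclass of Monoid and lives in the
-- Prelude, so we define (<>) and let mappend take its default.
instance Semigroup MyLog where
  MyLog a <> MyLog b = MyLog (a ++ b)

instance Monoid MyLog where
  mempty = MyLog []
#else
-- Older base: no Semigroup superclass, so mappend is defined directly.
instance Monoid MyLog where
  mempty = MyLog []
  mappend (MyLog a) (MyLog b) = MyLog (a ++ b)
#endif
```

The friction is that CPP works on raw tokens rather than on expressions or declarations, which is exactly what the proposals above try to address.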

But I do wish we talked about these ideas a little more - I’m not claiming to be the first to mention them, just wishing that the GHC/base conversation and the like got a little more airtime - since the benefit of such tools is immediate and would alleviate the frustrations of many parties.

Oh, weird - that talk is credited to @Ericson2314! Perhaps he can shed more light than I can :slight_smile:

I agree this would be great! However, we can’t build a tool for that step in the process if the process itself is the bigger part of the problem.

What I mean is, the primary improvements we can make here are WRT our process for rolling out change. ATM that process creates the pain we’re looking to reduce. So we need to understand the deficiencies in the process and make changes that improve the experience of rolling out these changes. Let’s call that a “graceful migration”.

Right now, as a community, we don’t really understand how to do a graceful migration. This is the challenge to do something about.

Unfortunately, I don’t have enough experience with Haskell or the details of these changes to speak with authority on specific changes to make to the process. I’ve seen others mention some of these points, but I haven’t been able to locate those threads/discussions again, and those people have checked out from this part of the Haskell experience, so I don’t see those points being reiterated.

ATM we can barely get ourselves out of this box, so DH (Dependent Haskell) and the like aren’t going to do much for us if we can’t figure this out.

1 Like

A CPP replacement with those features is a long way off. Working on GHC is currently too unproductive to justify the major investment. Maybe if a research group thought they could get some papers published, that would tip the scales - PhD person-hours are cheap.

I opened Pre-Pre-HFTP: Decoupling base and GHC - #28 by Ericson2314 because I think that is much lower-hanging fruit. If we could use a new base with an old GHC, we could make changes to base faster without resorting to CPP or running afoul of the 3-release policy, and that side-steps the issue.

Alright, I’m here again: I’ve had a breaking change of my own to deal with, which eventually led to a change of OS… oh, what fun :-/


I now believe it’s worse than that: we don’t know how to continually do graceful migrations, particularly at the rate of change/work/progress that both Haskell and GHC, as projects, are hurtling along at:

  • do we just keep hurtling along while we wait for a solution to appear?
    • We could be waiting a while for that solution (if one even exists), and in the meantime more and more people are being frustrated to the point of seeking out other projects.

If the problem currently has no solution, that leaves mitigation as the only other option, to reduce the problem to a manageable (or tolerable!) size. There’s one simple way to do that: reduce the rate of activity in both projects.

To those who would be annoyed by such a slowdown: if we just keep on going and continue losing people to other projects, then we risk having a very abrupt slowdown, the kind that can end a project - just imagine how annoying that would be!

Then again, Haskell isn’t the only nonstrict functional language - there’s still Miranda(R) and Clean…

1 Like

Sorry, but can I ask: how do I use Miranda?

Miranda download page. (Beware the caveats about its vintage, and that its Wikipedia page is mostly in the past tense. I think @atravers had tongue firmly in cheek.)

For completeness, see this to download Hugs. Hugs, too, provides a nonstrict functional language that isn’t GHC.

  • For Miranda(R) documentation, see the home page - it has links to manuals, textbooks and papers.

  • Both Miranda(R) and Hugs are implemented in C.

…I was inspired: I wasn’t expecting Hugs to be suggested as an alternative (I too have been a regular user of Hugs over the years). However, to quote the Hugs homepage:

Note: Hugs is no longer in development. The content on these pages is provided as a historical reference and as a resource for those who are still interested in experimenting with or otherwise exploring the system.

That is what I meant by GHC being the only implementation to successfully make the transition to Haskell 2010 - presumably the Wikipedia entry for Hugs also makes extensive use of the past tense. Having said that, who would have expected a 64-bit release of Miranda(R) after all these years - maybe Hugs can be reinvigorated as well…

WinHugs happily compiles in MS Visual Studio – in 64-bit if you insist. (And to be precise, it’s implemented in C++ – pre-1990’s C++ AFAICT.) And you can tweak its parser.yacc and type.checker, and teach it to be cleverer with instances/overlaps/FunDeps.

When you say “losing people to other projects” do you mean losing developers of GHC or users of GHC (i.e. users of Haskell)? In either case could you provide some evidence that this is actually happening? It’s completely contrary to what I’m seeing, as I elucidated above.

Lots of the “old gang” that started all sorts of Haskell projects in the early days are gone, true. Some of them left a trail of abandoned packages. The days of early adopters are over, and with that maybe some of the exciting flair and heated mailing-list discussions. But overall the community is much bigger now; there’s really no question about that.

Some of what I read here really seems more like nostalgia for those old days. People finally got jobs, researchers moved on to new topics, and teachers have their PowerPoints already sorted.

Wrt GHC: I totally get the point and I want to highlight this quote

…which I totally agree with.

But at the same time I feel @AntC2 wildly (maybe accidentally) misrepresented the work of the current GHC maintainers. They’re not working on new fancy language features 24/7. If you hang out in the development channels and read the GHC activities reports, you’ll see they’re working on much more, including bug fixing, performance improvements, new architecture support, a new GC (did you know?), etc.

Yes, there’s very little pushback on radical language feature proposals (including some that are not even complete, like dependent types)… but this can maybe be attributed to a form of pragmatism: keeping engaged the few compiler/language contributors that we have.

I think there are a couple of ideas to discuss, e.g.:

  1. create GHC LTS releases and don’t spread across too many branches… I feel there are too many new major GHC versions. But whether this really reduces maintenance load or not… I don’t know.
  2. fork GHC 8.10.7 and simply freeze language features. I guess for most industrial users this is still more work than upgrading to a new major GHC version every couple of years, so there would need to be more drive in this direction.
7 Likes

Again, I do not think the problem is the rate of change.

The problem is with the process used to roll out that change.

This problem is, unfortunately, complicated, and bigger and more nuanced than any one person can actively keep in their mind all at once. Another part of the problem is that there’s a disconnect between different groups of people involved in creating the experience.

Solving this problem requires a fair number of people communicating and working together to find the improvements to our process that we’re missing.

Fortunately, we can do this iteratively, and we can start this now. It’s not too late, though it’ll get harder the more frustrated and disconnected we are. So it requires patience and consistent attention.

I think the hard part is getting the right people together in a group and talking about the problems and possible solutions. But I still think we should do it.

EDIT: this is super exciting! Haskell Foundation Stability Working Group

2 Likes

I might be beginning to sound like a stuck record here, but we are not becoming more frustrated and disconnected. We are becoming less so. Apologies if I am mistaken, @ketzacoatl, but I don’t think you were around ten years ago, when there were huge amounts of fractious argument and strife. From my point of view, relative to ten years ago, the Haskell community is wonderfully placid and optimistic! Granted, it may look different to someone who has entered the community more recently.

And granted we would like to lower the level of frustration and increase the amount of connection regardless. I don’t disagree with the overall tone in this thread that we can do better on many axes! But I think we will be more effective if we start with an accurate assessment of the status quo. We are (relatively speaking) a harmonious community, energised and optimistic. In some situations panic is justified, in some situations time is close to running out, and the people who find themselves in those situations must respond accordingly.

I think in our situation it will be most effective to start from the assumption that the community has good forward momentum and good morale, and work out how to harness that for greater productivity. If we start from the assumption that we are frustrated and disconnected, then that sets a negative tone which will pervade community activities and actually be counterproductive, in my opinion.

Now, I’m willing to be wrong. If there’s hard evidence that my assessment is wrong then I’d like to be corrected so I can reset to a more realistic position! If you have some, please share.

3 Likes

I agree that we have an opportunity to turn the tide, but I don’t see resolution for the disagreements and rifts from ten+ years ago.

Those heated arguments led to an ecosystem of new packages, and eventually to Stackage and stack. Those folks stopped short of forking GHC, but I would imagine it was considered more than once (the only things stopping them being the amount of work and concern about the impact of another fracture).

Those rifts still exist in our ecosystem, and I don’t see how those are being resolved. In fact, I think we’re at the point where some people are choosing to move on and focus their attention elsewhere.

EDIT:

I think we’re not hearing some of this because those people have already said it many times, and eventually got to the point where it no longer felt worthwhile to invest more of their energy into pushing something that would not move, and so they have moved on. I would imagine we have the opportunity to win back at least some of these contributors, but that depends on how we as a community respond.

Totally agreed vis-a-vis the facts and observations. I just think it’s more helpful to interpret our community as finding itself presented with a number of tantalising opportunities for improvement rather than on the cusp of disaster requiring emergency action to avert.

I agree, though I am also doing my best to be as realistic and direct on the matter as I can. I don’t think it helps to ignore or downplay the potential loss that’s in play, with the cards on the table.

I agree there’s a big difference between where we could be with the right effort and where we could quite likely be without it.

1 Like

Yeah, it seems the stack-cabal wars simmered down, but then many of the stack people also basically left for Rust. First part great, second part not so great!