The evolution of GHC

Well, IMHO ‘global’ does not add much to the data point. It is still a single data point, and it is likely coming from one particular slice of the market. One possibility is that the overall pool of Haskell businesses has been shrinking while the remaining ones have been clustering together in one particular group, giving the illusion that there came to be more Haskell businesses.

I suspect they are happily earning a living, paying little attention to how the language and ecosystem evolve because they are sufficiently happy with how it is now and sufficiently happy with where it is going.

Granted, we can do much better in many things! But I don’t share your arch-pessimism, in fact I’m not pessimistic at all about the future of Haskell.


The notion that commercial Haskell has grown by less than 10x in 10 years just doesn’t ring true based on my experience. You’re welcome to interpret this anecdotal evidence as you see fit.


I mean, it is hard to believe when concrete statistics point in the other direction.

There are tons of statistics indicating similar trends.

That’s a relative ranking. The languages that have overtaken Haskell have presumably grown even more than 10x. Swift didn’t even exist 10 years ago.


Please share, particularly if you can produce a summary analysis. I would love to see!


In fact Haskell has dropped six places from 13 to 19 in 10 years. During that time four languages that didn’t exist 10 years ago have taken places above it (Go, PowerShell, TypeScript, Swift), all with huge corporate backing. The only language that existed ten years ago that has overtaken Haskell is R. In conclusion, I’m not particularly convinced to draw pessimism from this chart.

Well, it also fell in market share. At least:

I don’t think it is just GitHub growing; it is certainly one way to measure market share. Perhaps programming as a whole has grown to be huge, but that does not change the fact that Haskell is lagging behind.

My claim is about absolute numbers of commercial users, not relative numbers across a mix of industrial, hobbyist, and academic users (which is what GitHub’s share of MAU shows). Furthermore, adoption of Haskell is so small, relatively, that measurements will tend to be dominated by noise.

I just don’t see a cause for pessimism about the state of industrial Haskell adoption. It’s flourishing! Granted, we want it to flourish more.


Hmm, I see. Yes, it is hard to find a reasonable metric for the size of the Haskell market. I concur.
Maybe lots of Haskell projects before were hobbyist, after all.
Also, I guess at Haskell’s size, fluctuation is bound to be huge…

By the way, it is interesting how many articles consider Haskell to be dying, likely based on lacking data.

Another anecdote:

I wonder if this link actually used a concrete measure, or just… the TIOBE popularity ranking.

Perhaps they should be taken seriously, though, since we Haskellers are bound to have a bias in favor of Haskell.


Interesting discussion. I don’t actually find that I have much constructive to add, but I did want to answer this question:

Tweag has pushed, and continues to push, for linear types. I, who work for Tweag, am pushing for dependent types. Other folks at Tweag have somewhat separately identified that formal verification of software is a potential growth area. Right now, we’re focusing more on Liquid Haskell than dependent types as the best way of doing verification in practice. I believe that once Haskell has dependent types – and they have matured somewhat – using dependent types for verification will become more attractive, especially if we can work out how to connect the ease of Liquid Haskell with the power of full dependent types.
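To make the contrast a bit more concrete, here is a minimal sketch of the dependently-typed style of verification already expressible in today’s GHC (the `Vec` type and `vhead` are illustrative names, not from any particular library): indexing the vector by its length lets the compiler reject taking the head of an empty vector at compile time.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

import Data.Kind (Type)

-- Type-level natural numbers, used to index vectors by their length.
data Nat = Z | S Nat

-- A length-indexed vector: the length is part of the type.
data Vec (n :: Nat) (a :: Type) where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Because the type demands a non-zero length, calling vhead on an
-- empty vector is a compile-time error, not a runtime crash.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons (1 :: Int) (VCons 2 VNil))) -- prints 1
```

Liquid Haskell would express roughly the same property as a refinement on ordinary lists instead of a new indexed type, which is often lighter-weight; full dependent types generalize the idea to arbitrary propositions.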


My pessimism stems from the brain drain and fractured ecosystem we experience. There is also a serious disconnect between various leaders in our community and the experience we have as practical Haskellers, which I also see as a source of discouragement.

Anecdotally, ten years ago I could name every single corporate Haskell employer (and count them on just about one hand). I knew individuals at all of them, and knew the names of just about every Haskell using employee at them.

Now, every time I attend an event (rarely in the last few years, given the pandemic) I meet people from companies using Haskell that I didn’t know existed, or didn’t know were using Haskell, and often where I’ve never heard of any of the people using Haskell at those companies.

In my personal, anecdotal experience, but based on having interacted with Haskell communities and users for a long long time, we’re doing just dandy!


I agree to some extent that this thread is a little wishy-washy, but I will contribute my thoughts anyway as a data point… I just found my first Haskell job, after a job search that had no intention of being Haskell-only. By the end of the search I was turning down some FP and even some Haskell offers because there were more options than I could hope to follow up on. And this is with the constraint of being on the US west coast, specifically the Bay Area, which is probably one of the worst places to find junior non-research Haskell.

Also I note here that the problem that really needs addressing is not that changes should happen more or less often, but that they must (with a heavy emphasis) be automated. Haskell provides a beautiful guarantee - referential transparency - that makes refactors potentially far more powerful than in other languages. I think that should be the primary focus - automated migration tools and the like. I know it’s a lofty goal… but imagine that Haskell were the language where “when the community decides a mistake was made, it is automatically fixed”. Right now, Haskell is “when the community decides a mistake was made, it is fixed, and everyone suffers that fix”. If we could market Haskell as “aggressively better” and “improvement for free” - that’s an incredibly powerful message. This is the difference between “Monad of no return lost me 5 hours and continuous dependency hell” and “Monad of no return made my code better and I loved it”.
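As a tiny illustration of why referential transparency makes such rewrites mechanical (a sketch, not a description of any existing tool): for any lawful `Monad`, `return` denotes the same function as `pure`, so a migration tool could rewrite every `return e` to `pure e` without changing program behavior.

```haskell
-- Pre-migration style: uses `return`, which the Monad-of-no-return
-- proposal turns into a plain synonym for `pure`.
oldStyle :: IO Int
oldStyle = do
  let x = 41
  return (x + 1)

-- Post-migration style: the mechanical rewrite `return` -> `pure`.
newStyle :: IO Int
newStyle = do
  let x = 41
  pure (x + 1)

main :: IO ()
main = do
  a <- oldStyle
  b <- newStyle
  print (a == b) -- prints True: the rewrite preserved behavior
```

Tools in the retrie/HLint family already apply this particular rewrite; the point here is that referential transparency is what makes it safe to apply blindly, at scale.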

I also don’t think Haskell’s message is “correctness” anymore, because of the less-pleasant memory situation and lack of dependent/refinement/termination types. Unfortunately I fear that Rust/Agda will take over correctness in the appropriate domains (systems and maximum verification). But Haskell still has incredible potential as a general purpose lang, and with dependent types, could be the language for hyper-maintainable software targeted at difficult problems with high assurance requirements.

Thanks for bringing this chat up - as much as we are all just spitballing here, it has prompted me to reinvestigate some plans I had for the “global refactoring” tool I mention… So at least it provoked some volunteer hours from me.


Interesting perspective! I want to ask: how do you keep backwards compatibility with the auto-migration tool?

Well, this is part of the issue: we don’t have an expression-based CPP replacement. At one point I cooked up a proposal for one, and such a thing has also been proposed by Matthew Pickering and others. The topic has at least been discussed in the literature - though I can’t comment on the extent, or on whether there is any impetus to get things moving.

This video links to some papers within the presentation. I will say that I don’t think we need to go that deep - we can get a long way with a better CPP without full expression-level support. We can probably also get away with plain CPP for basic cases, and use variable assignments/type declarations for complex ones.
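For the “basic cases”, today’s tool is the `MIN_VERSION_base` macro that Cabal defines during a build. A minimal sketch (the `#ifndef` stub is only there so the snippet also compiles outside a Cabal build; the `base`-4.11 cutoff for a Prelude-exported `<>` is real):

```haskell
{-# LANGUAGE CPP #-}

-- Cabal normally defines MIN_VERSION_base; stub it out so this
-- file also compiles standalone (e.g. under runghc).
#ifndef MIN_VERSION_base
#define MIN_VERSION_base(x,y,z) 1
#endif

#if MIN_VERSION_base(4,11,0)
-- base >= 4.11: Semigroup's (<>) is exported from the Prelude.
greeting :: String
greeting = "hello, " <> "world"
#else
-- Older base: fall back to plain list append.
greeting :: String
greeting = "hello, " ++ "world"
#endif

main :: IO ()
main = putStrLn greeting
```

An expression-level replacement would let the same choice live inside the language, as an ordinary conditional on a version value, instead of in a token-level preprocessor pass that the parser never sees.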

But I do wish we talked about these ideas a little more (I’m not claiming to be the first to mention these ideas, just that the GHC base conversation et al get a little more airtime) - the benefit of such tools is immediate and alleviates the frustrations of many parties.

Oh, weird - that talk is credited to @Ericson2314! Perhaps he can shed more light than I can :slight_smile:

I agree this would be great! However, we can’t build a tool for that step in the process if our process is the bigger part of the problem.

What I mean is, the primary improvements we can make here are WRT our process for rolling out change. ATM our process leads to the pain we’re looking to reduce. So we need to understand the deficiencies in the process and make changes that improve the experience in rolling out these changes. Let’s call that a “graceful migration”.

Right now, as a community, we don’t really understand how to do a graceful migration. This is the challenge to do something about.

Unfortunately, I don’t have enough experience with Haskell or the details of these changes to speak with authority on specific changes to make to the process. I’ve seen others mention some of these points, but I haven’t been able to locate those threads again, and those people have checked out of this part of the Haskell experience, so I don’t see those points being reiterated.

ATM we can barely get ourselves out of this box, so DH/etc aren’t going to do much for us if we can’t figure this out.


A CPP replacement with those features is a long way off. Working on GHC is currently too unproductive to justify the major investment. Maybe if a research group thinks they could get some papers published that would tip the scales, because PhD person-hours are cheap.

I opened Pre-Pre-HFTP: Decoupling base and GHC - #28 by Ericson2314 because I think that is much lower-hanging fruit. If we could use a new base with an old GHC, we could make changes to base faster without doing CPP or running afoul of the 3-release policy, and that side-steps the issue.

Alright, I’m here again: I’ve had a breaking change of my own to deal with, which eventually led to a change of OS… oh what fun :-/

I now believe it’s worse than that: we don’t know how to continually do graceful migrations, particularly at the rate of change/work/progress/etc. that both Haskell and GHC, as projects, are hurtling along at:

  • do we just keep hurtling along while we wait for a solution to appear?
    • We could be waiting a while for that solution (if one even exists), and in the meantime more and more people are being frustrated to the point of seeking out other projects.

If the problem currently has no solution, that leaves mitigation as the only other option, to reduce the problem to a manageable (or tolerable!) size. There’s one simple way to do that: reduce the rate of activity in both projects.

To those who would be annoyed by such a slowdown: if we just keep on going and continue losing people to other projects, then we risk having a very abrupt slowdown, the kind that can end a project - just imagine how annoying that would be!

Then again, Haskell isn’t the only nonstrict functional language - there’s still Miranda(R) and Clean…
