8 months of OCaml after 8 years of Haskell in production

But this is how the feature shoot-out trade-off table should really end:

lang      research potential    stability (∴ production use, comfort zone)
haskell   more                  less
ocaml     less                  more
cobol     none                  infinite  (sorry, couldn’t resist :laughing:)

Lately I’ve been working a lot with Standard ML (so basically OCaml–), and one thing I absolutely miss is generic deriving. For this codebase I need to define essentially a map function for every datatype, but without typeclasses and generic deriving I have to write all of that error-prone boilerplate myself.
Related to this are newtypes; I miss them a lot. IMO deriving (via)/newtypes is by far the best Haskell feature
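To make the boilerplate concrete, this is the kind of hand-written map I mean, sketched in OCaml (the Standard ML version is nearly identical); the type and function names are just for illustration:

```ocaml
(* A typical datatype and its hand-written map. Without generic
   deriving, every new datatype needs its own copy of this pattern,
   and each copy is a chance to introduce a bug. *)
type 'a tree =
  | Leaf
  | Node of 'a tree * 'a * 'a tree

let rec map_tree f t =
  match t with
  | Leaf -> Leaf
  | Node (l, x, r) -> Node (map_tree f l, f x, map_tree f r)
```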

What I like in OCaml is that let rec is explicit, so I can easily shadow variables so that the earlier binding is out of scope for the rest of the function.
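A tiny illustration of what that buys you (function name made up for the example):

```ocaml
(* Because plain let never recurses, each let simply shadows the
   previous binding, so stale intermediate values can't leak into
   later code by accident. *)
let process x =
  let x = x + 1 in  (* shadows the parameter; the old x is gone *)
  let x = x * 2 in  (* shadows again *)
  x
```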


I’ve done so many times, on various occasions. Yes, there’s progress on some fronts, but I believe that progress is often based on specific individuals (e.g. SPJ or Moritz pushing the stability efforts).

The problem is that it’s not primarily a technical issue, but one of aligning perception and defining priorities. So, software engineering approaches don’t work that well.

Some examples:

All these issues talk about perception, goals and priorities. These are in flux depending on the members of committees, the maintainers of tools and libraries, as well as the funding and clientele of some companies.

The end user experience is always an afterthought.


This is a great case in point of why I believe the situation is improving. I hope this doesn’t come across as impolite, because that is not my intention, but this is my perception: a couple of years ago stability was not on SPJ’s radar. He didn’t think about it and he didn’t deeply appreciate how important stability is to a language ecosystem. Now he does, moreover he is one of the biggest cheerleaders for stability! That is the outcome of a lot of challenging values-aligning work by several people (particularly the stability working group).


And I don’t want to sound like a groupie, but SPJ is special, not just because he’s the father of Haskell, but because he keeps surprising us with his way of reasoning, collaborating and listening to other people’s arguments.

But I’m not convinced that shows a shift in community perception. I think some of the tension we experience is intrinsic, due to the roots of Haskell, and will never truly cease to exist. A language like Go does not have much of this tension, partly because the community is almost uniformly focused on “getting things done” and has a low appetite for experiments.

Haskell will always attract people with a high appetite for experimentation, and that will keep causing churn, tension and debate about stability, goals, complexity etc.

Yes, I believe the GHC SC stability document is a major step forward, which is why I’ve actively participated in those discussions.

And yet, my feeling is always that these things depend far too much on the support of individual people, and the fact that we only just got here after 33 years of Haskell says a lot about (past) perception and priorities.

These things are nebulous and you really only experience them once you want to change something non-trivial. That is why I keep trying to push towards a “think about the end user first” approach. And I don’t think I’ve been particularly successful… it’s one thing to improve a library or tool yourself and a completely separate thing trying to change perception. You can’t work against the community perception for a long period of time.

So I guess my outlook is more pessimistic than yours.


In OCaml, there’s code generation enabled by preprocessor tools called PPX (one example is ppx_deriving). It’s not as powerful as Template Haskell (i.e. it doesn’t have access to types, only to syntax, so it’s more like Rust macros), but it works well in practice :relieved:

Instead of being type-directed, it relies more on naming conventions and encapsulation, but I find this trade-off reasonable.
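For anyone who hasn’t seen it, this is roughly what ppx_deriving usage looks like (assuming the ppx_deriving package is enabled in your build configuration; the type name is just an example):

```ocaml
(* With the ppx_deriving preprocessor enabled, [@@deriving show, eq]
   expands at compile time into show_point / pp_point and
   equal_point definitions — no hand-written boilerplate. *)
type point = { x : int; y : int } [@@deriving show, eq]

let () =
  print_endline (show_point { x = 1; y = 2 })
```

The naming convention (show_<type>, equal_<type>) is exactly the “naming conventions instead of typeclasses” trade-off mentioned above.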

Related to this are newtypes; I miss them a lot. IMO deriving (via)/newtypes is by far the best Haskell feature

OCaml actually has newtypes as well. You can define them like this:

type size = Size of int [@@unboxed]

I think that ppx_deriving could leverage this to derive things differently. I’m not sure whether it has been done already, but I like that this can be handled at the library and ecosystem level, removing the compiler as a bottleneck in the pipeline.
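For what it’s worth, a sketch of what that combination could look like (I’m assuming here that ppx_deriving plugins accept the [@@unboxed] attribute alongside [@@deriving], which I haven’t verified in every plugin):

```ocaml
(* Hypothetical combination: the wrapper type gets its own derived
   printer and comparison, distinct from those of the underlying int,
   while [@@unboxed] keeps the runtime representation cost-free. *)
type size = Size of int [@@unboxed] [@@deriving show, ord]
```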


To summarize how things appear to me:

We both believe that “culture eats strategy for breakfast”. Substantial elements of Haskell’s culture make it difficult to implement sustainable improvements to user experience. That makes you pessimistic, because you know that culture is hard to change. On the other hand I’m optimistic because I believe the culture will change.

Does that sound like a fair summary?


What makes you think that?


How many years does Haskell have left? This thread revived my worry about Haskell becoming basically dead.

How many years does Haskell have left?

In the continuing absence of an all-new general-purpose non-strict programming language - many.


My concern is whether that would be enough to keep enough people tied to Haskell. What if non-strict evaluation comes to be considered ‘legacy’?

More points for lazy evaluation (2011) provides an informative example of what would be lost. But if that isn’t convincing enough, read section 1. Conventional Programming Languages: Fat and Flabby of Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs (1978) by John Backus to better appreciate just how dystopian a programming future devoid of non-strict semantics would be…


Yeah, I mean, every other part of the world is heading straight into dystopia. What would prevent programming from following the trend?


Welp, maybe it fizzles out, maybe it gets huge traction. It all depends on what people do, so there’s no predicting.

If it helps, I’ve been involved in a little game called Garry’s Mod, which has existed since 2005 and was released for purchase in 2006. It was just $10 at the time. We’re in 2024 now and that game still has a huge community. People are still making awesome things in the Lua it supports. It’s not as old as Haskell, but it has defied all my expectations.

The world of politics may be bleak, but I am convinced that Haskell will live a long and happy life yet. Disagreements about direction aside, I think we have a beautiful community with wonderful people. Everyone is brilliant in their own way. The Haskell Foundation is also doing amazing work.

There’s no predicting the future, but right now I think Haskell, and its people, are doing quite alright.


I don’t mean to draw parallels… but if it helps, look at PHP? :slight_smile:

Conspicuous failure to die, and actually some significant and sustained improvement of long-standing weaknesses. It’s possible if enough people want it.


This heavily depends on your definition of “dead”.

In terms of GHC development, as long as there are meaningful changes coming out on a semi-regular schedule, it’s alive. Sure things take years to implement, but that’s because the contributor pool is small and every step is weighed accordingly.

In terms of community support, it’s complicated. Libraries generally only get released to die, and any complex ones that stick around have grown into horrifying monstrosities, seemingly completely foundational to whichever ecosystem niche they occupy. Things here can change if enough people align in understanding, time, and space, so it’s impossible to predict when and how this may occur.

In terms of industrial adoption, it’s at best in a coma. I, a person with five years of experience in Haskell and nothing else, cannot in good faith recommend Haskell to a company because I do not see how they would profit from using it. The job market, after some spikes during the cryptocurrency bubble, seems to be mostly barren these days, and I have no idea how to find a Haskell-adjacent job for myself, let alone someone with less functional experience than me.

In general I have to agree with @atravers, Haskell occupies a rather unique niche of its own, so if it were to die someone would just come up with Haskell 2.


Yes, I think they focus on output with the limited resources they have.

And yet, dealing with GHC upgrades is a full-time job. Figuring out which GHC version is safe to use on all platforms without major issues is a full-time job.

For hobbyists and library maintainers, it becomes unnecessary churn. For companies, it becomes a serious monetary investment.

But this is nothing new. Much of that is described here: Leaving Haskell behind — Infinite Negative Utility

What I see is that some parts of the decision-making entities now seem to understand that “stability” actually matters. But that’s just one instance of “end user experience”.

Which is why I think we haven’t made actual progress wrt perception and awareness. We just managed to push through one item that could have sunk the entire ship if we hadn’t.

@rae proposed a framework of thinking about design decisions: https://github.com/ghc-proposals/ghc-proposals/pull/625#issuecomment-1868421227
Maybe this is too verbose and the problem is much simpler: developing an internal awareness of how decisions actually affect end users at all. Take switching the GHC Windows toolchain to Clang: this was a high-risk decision and caused lots of problems, and there was no separate bindist with the old GNU toolchain. I could go on and on about these small issues that seem very boring from an academic viewpoint. But they add up.

Yes, there are two parts to this:

  • lots of people use Haskell for experimentation
  • making your library work with 3+ major GHC versions can be an insane amount of work

Again: I think the “leaving Haskell behind” blog post also covers this.

I agree with that, and I’m also considering shifting my professional career elsewhere. Despite years of industrial experience and lots of open-source involvement, I don’t have the feeling that investing more time in Haskell is a good decision career-wise. And my open-source work has never landed me a proper job. Don’t expect it to boost your options. Most employers don’t know and don’t care. Prior experience, aced interviews and colleagues referring you are much more important factors.

I think the only time where picking Haskell as a startup may make sense is if you’re sure you will benefit from an isolated ecosystem that someone else is maintaining (say… the Cardano ecosystem).

Otherwise, you’re for sure going to end up maintaining or forking open-source libraries, so you better have engineers who are really good at that (maintaining libraries other people use is very different from writing a proprietary backend), but then you also become rather dependent on those special engineers (like @ChShersh ;)).

Stay open-minded. Programming doesn’t end with Haskell.


The reasons that I think the culture will change fall into two broad categories. The first category is that the community that is actually doing the work wants change. The vast majority of the Haskellers who I spend serious time interacting with (specifically, online on various fora like this one, and on repo issues and discussions) who are making contributions to the ecosystem are either explicitly agitating for Haskell to become more suitable for industrial development, or at least in favour of it. Although there’s a lot of inertia, and reorienting Haskell will be like turning a battleship, I don’t see any opposing force trying to turn Haskell another way. People resist changes to the status quo. But that’s fine: there are well-understood approaches for dealing with that.

The second category is my personal experience. Three years ago I had to literally join the Haskell.org committee to get a PR to the website merged. That’s how hard it was to make changes to the community. Since then I’ve been successful in resolving some systemic problems, most of which are people problems, not technical problems, for example automating deployments to the website (the “people” part was ensuring the review process was streamlined), working diplomatically behind the scenes to build consensus that ghcup should be the single recommended install method, and advocating for stability, partly via the Stability Working Group. When I joined the CLC I was probably the most vocal proponent of the importance of stability. Then a year ago three new members joined who are arguably more vocal proponents than me! People who want cultural change are entering positions of influence.

So, things can change, people want them to change, they’re willing to work towards changing them, and I’ve already managed to change a few things and learned how to do it. That’s why I believe the culture will change.

Granted, I don’t have as clear a view into all elements of the ecosystem as you. I don’t know everything you know about the systemic issues affecting getting from GHC/HLS/Cabal source code into deployed software that works well for users, so maybe I’ve overlooked something, but at this point I’m very optimistic.


How about your opinion on Haskell’s life expectancy? I want to know how long it will last, in the sense of at least 0.5–1% of the ecosystem remaining in a usable state. From this perspective, will it last more than 5–10 years?
(To be clear, I was asking for hasufell’s opinion, since he seems fairly objective and knowledgeable about these topics.)

It’s been around for at least 25 years, and programming languages take decades (if not more) to die. Haskell is so far the only language I know of that is lazy and pure; it may be a small niche, but a niche nevertheless (whatever the arguments against that niche are). Until something better comes along in that niche, I don’t see how Haskell could die in the next few years.