Empirical evidence of Haskell advantages

As Haskell programmers, we have an intuitive idea that developing with Haskell:

  1. produces code that is more correct
  2. produces code that is easier to maintain
  3. has productivity benefits

Does anyone know of any research or other empirical evidence to validate this?


I recall this paper on prototyping.

Mind you, even for less complex questions (e.g. is Dvorak better than QWERTY? Is a keyboard interface faster than clicking with a mouse?) there are any number of papers, rebuttals, anecdotes, critiques, and blog posts stating anything and everything.


I don’t have a link to empirical evidence of Haskell’s superiority, but I would like to recommend that everyone read the following blog post by Dan Luu, in which the author critiques 16+ existing research papers on the benefits and drawbacks of static typing:

The Ranking Programming Languages by Energy Efficiency paper shows that Haskell sits somewhere in the middle in terms of both performance and energy consumption.

On a personal note, I’m currently not really convinced that developing with Haskell produces code that is better (more correct, easier to maintain, has productivity benefits, etc.).

I’m absolutely in love with the ability to refactor code confidently and worry much, much less about my refactoring breaking the code :heart:
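For readers wondering what that confidence looks like in practice: extend a sum type, and with `-Wall` (which enables `-Wincomplete-patterns`) GHC points at every pattern match that needs updating. A minimal, hypothetical sketch (the `Payment` type and `describe` function are invented for illustration):

```haskell
{-# OPTIONS_GHC -Wall #-}

-- A hypothetical domain type. If we later add a constructor
-- (say, Crypto), GHC warns about every now-incomplete match
-- on Payment, so the refactor can't silently miss a case.
data Payment = Cash | Card

describe :: Payment -> String
describe Cash = "paid in cash"
describe Card = "paid by card"

main :: IO ()
main = putStrLn (describe Card)
```

This is the mechanism behind “make illegal states unrepresentable”: the compiler turns a refactor into a checklist of warnings rather than a hunt for runtime surprises.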

But it was difficult for me to feel enthusiastic about using Haskell when I spent 2 months debugging a space leak without ever finding a reason for it :disappointed_relieved:
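For those who haven’t hit one: a textbook source of space leaks is lazy accumulation, where an unforced accumulator builds a chain of thunks on the heap. This sketch is not the leak from the post above (which was never diagnosed), just the canonical illustration:

```haskell
import Data.List (foldl')

-- foldl builds the thunk (((0 + 1) + 2) + ...) and only forces it at
-- the very end, so the whole chain is retained on the heap: a space leak.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at each step, running in constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 1000000])
```

The frustrating part, as the post suggests, is that real leaks rarely look this obvious; they usually require heap profiling (`+RTS -hc`) to track down.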

At the end of the day, Haskell is better in some places and worse in others. I just hope that it’s worse only in those places that can be fixed, and with time those places actually will be fixed.


It’s hard to imagine data that would answer this question. If Haskell programmers perform better than Python programmers, it might be the language, but it might be the programmer. If a programmer does better in Haskell, it might be because they know Haskell better.

The most objective data seems like it would be a tournament – but the productivity of champions is not necessarily very informative about the productivity of average laborers.


Disagree. I often see correctness as depending more on the quality of low-level primitives. We have a lot of issues with those, and with fundamental building blocks that were simply bad decisions: lazy IO, String, encoding issues, the custom Windows POSIX layer, etc. Some of these are being worked on, but other languages got these things right very early.
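To make the String point concrete: `String` is just a synonym for `[Char]`, a lazy singly linked list, so every character costs a cons cell plus a boxed `Char`. A base-only sketch (the packed alternative, `Data.Text` from the text package, is omitted here to keep the example self-contained):

```haskell
import Data.Char (toUpper)

-- String = [Char]: processing walks the list cell by cell, allocating
-- as it goes. Fine for small inputs, wasteful for real text workloads,
-- which is why Data.Text (a packed representation) is usually advised.
shout :: String -> String
shout = map toUpper

main :: IO ()
main = putStrLn (shout "hello")
```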

That depends on how you use it. There are some professional open-source projects that can give you an idea of what happens if you let your engineers do whatever they want.

The bus factor starts converging to 1, and “easy to maintain” is suddenly only true for the original authors of the architecture.

But generally, I agree.

Somewhat, but you can also navigate yourself into technical debt more quickly, because changing code isn’t always easier (e.g. when it comes to how to express effects, or how to do streaming, etc.). Decisions in strongly typed languages tend to have more impact and are harder to revert.

Additionally, from a manager’s POV, productivity is a function of the team, and that correlates directly with hiring and the employment market.


I asked the same question on Reddit some time ago. It did not go well for me on a spiritual level, but some curious research turned up.

Summary of hard research:

The summary linked by @ChShersh mentions some of these. Next time I have a question like this, I guess I should just ask him first!


I’d like to take a second to acknowledge that this forum, compared to some others, is a really good place to ask difficult questions like this. Thanks, all who make it this way.


There is no real data on this, and likely never will be.

The size of the codebase, the development effort involved, and the experience required all stand in the way of a meaningful empirical test. Experienced developers are too busy paying their bills to spend 1,000 hours developing a toy project.

I’ve seen some tests carried out at universities, but no matter how large the projects are, they don’t involve experienced developers. I’ve gone through many studies and have always been disappointed.

Maybe a wealthy company like Apple or Google might conduct a large-scale study as the results could yield immediate real-world gains for them.

It probably would have been a useful exercise for Twitter employees before they were nixed.

That being said, pretty much every language is good enough to use to make something useful.

The desire for increased productivity is such an industrialized way of thinking about coding (especially when it’s for personal projects). What’s most important is mindful consistency.

You’ll get far more done by committing to completing projects than obsessing over benefits.

Pick a language you like. Make something you want to exist. Enjoy.