Is there a reference of best practices?

We have style guides (and even automatic formatting tools), hlint, and the Haddock standard for comments. Together, these ensure that Haskell code is uniformly good-looking at the level of syntax. We also have a bunch of warnings (some of which are off by default) that catch many blunders, such as incomplete patterns.

However, there are also high-level best practices that one really ought to follow in order to take full advantage of Haskell. Some examples:

  • If you need to acquire a resource and then release it, you should do it with a bracket. (Why?)
  • If you are catching all exceptions, you should re-throw asynchronous ones. (Why?)
  • Whenever you are mapping, folding or unfolding, you should do it with higher-order functions. (Why?)
  • Whenever theoretically possible, a type should have exactly the right set of values, so that code is correct by construction. (I think this is the first source?)
  • You should parse, not validate. (I recall the opposite is called Ā«boolean blindnessĀ»?)
  • You should know which functions in the standard libraries are safe and easy to understand, and which are dangerous and confusing. (There is some work to codify this.)
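For instance, the first point could be sketched like this (a minimal example; `fileSize` is an invented name, not from any of the guides above):

```haskell
import Control.Exception (bracket)
import System.IO (IOMode (ReadMode), hClose, hFileSize, openFile)

-- bracket acquires the handle, runs the action, and guarantees that
-- the handle is released even if the action throws an exception.
fileSize :: FilePath -> IO Integer
fileSize path = bracket (openFile path ReadMode) hClose hFileSize
```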
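And «parse, don't validate» is the difference between checking a property and recording the evidence in the type; a small illustration using `Data.List.NonEmpty` from base:

```haskell
import Data.List.NonEmpty (NonEmpty, nonEmpty)

-- Validating merely answers yes or no; the caller immediately
-- forgets which answer it got.
validateNonEmpty :: [a] -> Bool
validateNonEmpty = not . null

-- Parsing returns a value whose type carries the evidence, so
-- downstream code cannot lose the fact that the list is non-empty.
parseNonEmpty :: [a] -> Maybe (NonEmpty a)
parseNonEmpty = nonEmpty
```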

These best practices are mentioned disparately here and there. If I forget some of them (and I know I do), there is no way for me to refresh my memory. Further, it is hard to explain to a programmer with a «traditional» background how Haskell is different: it is very much possible to write stringly typed imperative code in Haskell!

How much better life would be if all known best practices were codified!

If there is a reference for this, please link me to it. Otherwise, please throw your best practices in comments and I shall compose that reference from your suggestions and put it on the Internet.

Production Haskell is one such reference of best practices.

The old haskell-lang.org site had some good content on this topic, though that has since been moved to the Applied Haskell Syllabus.

Personally, I'd like to see this kind of content included in an authoritative guide: onboarding up to the intermediate level, plus recommended reading for becoming an expert. I'd like it to be community-supported and hosted on haskell.org.

Before someone says "why don't you do that": it is possible and feasible, but reaching agreement among the various parties seems to keep blocking it, or burning out the contributors who push in this direction, so a larger group of people would need to work together to make this a reality.

It does not seem to be available, even for money, at this time. (The site says it will be published some time soon…)

Maybe @parsonsmatt can chime in.

Yeah, so: the book was picked up to be published by Manning, and as a result Matt took it off of Leanpub. According to the project's Twitter, the book very recently finished its first round of reviews, and he hopes to get it into the Manning early access program soon. I am very much looking forward to reading it when it comes out. But I'm in the group of people (quite a few, I assume) who only heard about it after it was picked up by Manning, and therefore didn't get the chance to check out the early versions on Leanpub; we are just waiting to hear anything about it. When it comes out, I am sure it will be a great answer to the question this thread is asking. Right now, though, I don't think it's really a good choice of material to point to, given that nobody new can get access to it for an indeterminate amount of time.

By the way, apparently the deal with Manning didn't work out, so Matt put it back up on Leanpub.

There is also a tool called stan that works somewhat like hlint, but on the level of meaning, not spelling. It has recently been integrated into haskell-language-server as a plugin.

I no longer believe this. There is a trade-off here, as follows:

  • More precise types are safer to work with, but harder to adapt to changing requirements.
  • Less precise types are more dangerous to work with, but easier to adapt to changing requirements.

For example, say you are writing a chess game. You can represent your chessboard as a function 𝔹⁶ → piece, that is to say, (Bool, Bool, Bool, Bool, Bool, Bool) → piece. Here, the first three bits denote the rank and the other three denote the file. There are exactly 64 elements in the set 𝔹⁶, so this is a perfectly precise type. But representing your battlefield like this paints you into a corner.
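In Haskell, that representation might look like this (Piece, Square, and Board are invented names for illustration; Maybe marks an empty square):

```haskell
data Piece = Pawn | Knight | Bishop | Rook | Queen | King
  deriving (Eq, Show)

-- Three bits for the rank, three for the file: exactly 64 squares,
-- no more and no fewer.
type Square = (Bool, Bool, Bool, Bool, Bool, Bool)

-- Nothing represents an empty square.
type Board = Square -> Maybe Piece

emptyBoard :: Board
emptyBoard _ = Nothing
```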

  • What if later on you wish to pivot to international draughts? Now you need an index set of size 10 × 10 = 100. It is going to look nothing like your 𝔹⁶, so you will have to modify a lot of code. Maybe you write a type Decimal = 0 | 1 | … | 9 and represent your battlefield as (Decimal, Decimal) → piece.

  • What if later on you wish to pivot to 3-dimensional chess? Should your type have been Vector 2 Decimal instead, so that you can make it Vector 3 Decimal? Well, now you need generalized algebraic data types to encode the dimension at the type level.
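A sketch of such a length-indexed vector, using Peano naturals promoted with DataKinds to stand in for the numeric indices (all names here are invented for illustration):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}

data Nat = Z | S Nat

data Decimal = D0 | D1 | D2 | D3 | D4 | D5 | D6 | D7 | D8 | D9

-- The length of the vector is encoded at the type level, so a
-- coordinate on a 10 x 10 board is Vector ('S ('S 'Z)) Decimal,
-- and adding a dimension is a change of one type index.
data Vector (n :: Nat) a where
  VNil  :: Vector 'Z a
  VCons :: a -> Vector n a -> Vector ('S n) a

origin :: Vector ('S ('S 'Z)) Decimal
origin = VCons D0 (VCons D0 VNil)
```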

Many everyday types are simply beyond the Haskell type system. For example, what is the type of legal moves on a given chessboard? It depends on the value of the chessboard. So, it is a type that depends on values. So, it needs dependent types. There is nothing you can parse this into: you can only validate.

This is maybe a trivial thought, but I must repair my own publicly presented misconceptions, so let it be posted.

That reminds me of Rich Hickey's Maybe Not talk. His main argument (no pun intended) is that changing the type of an argument from A to Maybe A is an unnecessarily breaking change. We could just always use Maybe everywhere, but that would be imprecise.

On the other hand, I wonder if the problem would go away if we had better refactoring tools. Then we can both be precise and easily change our programs.
