Head.hackage usage, rationale, and problems

Well, I don’t know personally, but @angerman thinks it’s not only happening, but happening so much that it is lowering the quality of the ecosystem.

That would basically just be a centrally maintained “project file” with all those source-repository-package stanzas, which you can import:
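Concretely, such a shared file might look something like this (the file name, repository URLs, and tags are made up for illustration):

```cabal
-- ghc-patches.project (hypothetical shared file)
source-repository-package
  type: git
  location: https://github.com/some-fork/text
  tag: 0123456789abcdef0123456789abcdef01234567

source-repository-package
  type: git
  location: https://github.com/some-fork/aeson
  tag: fedcba9876543210fedcba9876543210fedcba98
```

and consumers would pull it in with a single line in their own `cabal.project`:

```cabal
import: https://example.org/ghc-patches.project
```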


It’s not clear to me how this would be an improvement over the status quo. On the contrary, it seems to make your concern worse: auditing the patch set would require checking out dozens of distinct repositories, whereas today they are found centrally in the head.hackage repository.

I wholeheartedly agree with this. With better collaboration tools I think the need for head.hackage (both social and technical) would be greatly reduced.

For various reasons:

  • easier to use/configure: one import line in my cabal.project. No dealing with repositories or running cabal update.
  • I can see the origin of the patches very clearly, and can often tell just from the repo URL whether it’s an upstream patch or not. Otherwise I can easily see on GitHub if it’s a fork and follow the links to the upstream repo. head.hackage doesn’t bother putting any of that information in the patch header.
  • I can see what packages are changed more easily. I don’t have to look up the head.hackage repo.
  • it’s not a moving target


  • extracting the exact patches for all changes is likely non-trivial
  • I’m not sure how exactly in-place updates to remote project files work. I remember a discussion I had with @sclv about this on the PR that introduced imports. I think you’d have to delete the downloaded file from dist-newstyle/

I think it’s a shame that the GHC.X.hackage proposal didn’t get more traction; I thought it was a good idea. One suggestion I made at the time, which Moritz might like, was that we could potentially sunset a GHC.X.hackage repository after some time period, which would give people a good incentive to get off it as quickly as possible.

head.hackage is a pool of patches from different authors and I as an end-user have no easy way of knowing whether those patches are…

Another idea that occurs to me is that maybe we could make the information that head.hackage packages are not created by the maintainers visible to cabal. Imagine, say, an “unofficial” metadata tag that could be attached to package versions; then we could have controls in cabal to allow these, or allow them only for certain packages, etc. That would allow you to only fetch unofficial packages from head.hackage that you actually need, rather than allowing just anything that might be in there. E.g. allow-unofficial:text would allow cabal to pick an unofficial version of text and only text.

(Alternatively, some kind of finer-grained way of specifying which packages can be resolved from which repositories? active-repositories: hackage, head.hackage(text)?)
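To make the two ideas concrete, a `cabal.project` using such controls might read as follows. Both settings are entirely hypothetical; neither exists in cabal today:

```cabal
-- Hypothetical syntax, sketching the two proposals above.

-- Option 1: per-package opt-in to "unofficial" versions,
-- i.e. versions not published by the package's maintainers.
allow-unofficial: text

-- Option 2: restrict which packages a repository may provide,
-- so head.hackage can only ever supply text.
active-repositories: hackage.haskell.org, head.hackage(text)
```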

I’m not sure how exactly in-place updates to remote project files work.

Any solution here is going to be stateful: at least you need to add new patches from time to time. head.hackage is a cabal package repository and at least makes that statefulness explicit. In particular it can in principle support index-state (there’s a PR to use @andreabedini’s foliage tool, which would enable that), which means that you can use it reproducibly. Having a project file that was updated in place loses the ability to control the state there, which seems worse to me.
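For reference, pinning would then work the same way as for any other secure repository. A sketch, assuming the foliage-based setup mentioned above has landed (the root keys are elided here, and the exact repository URL may differ):

```cabal
-- Treat head.hackage as an ordinary secure package repository…
repository head.hackage
  url: https://ghc.gitlab.haskell.org/head.hackage/
  secure: True
  root-keys: <elided>
  key-threshold: 3

-- …and pin both indexes to a known state for reproducible builds.
index-state: hackage.haskell.org 2024-01-01T00:00:00Z,
             head.hackage 2024-01-01T00:00:00Z
```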

Well, I don’t know personally, but @angerman thinks it’s not only happening, but happening so much that it is lowering the quality of the ecosystem.

I will note that I have only very rarely seen head.hackage used in the wild. The cases I can think of are:

  • HLS has sometimes used it, but HLS tries very hard to get something that works with a new GHC quickly. We are also very proactive at getting rid of such things - we really want HLS to be installable just from Hackage.
    • This is a problematic usage! If we build HLS binaries with head.hackage then we are exposing end-users to any questionable stuff in the head.hackage patches, not just developers! So I’m not super happy about this actually.
  • Some IOG things have used it, but again I think that’s mostly been a consequence of e.g. trying to use very new GHCs in haskell.nix.

I’m not sure. Most people are not very comfortable with hackage repos, and the idea probably stems from source distros like gentoo, where you can freely mix repos.

The idea, IMO, is that repos are “rolling release” in that sense, so having a frozen repo is kind of odd and counter-intuitive.

The ergonomics of dealing with repos aren’t really there yet. You should be able to very precisely define a priority for them (in case a package version appears in multiple repos), tell cabal via constraints that you want a package from only a specific repository (in gentoo that’s done via pkg::repo), or ignore certain packages from a repo.
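For comparison, the gentoo mechanism referred to above looks like this (the package and overlay names are made up for illustration):

```shell
# Install a package specifically from the "guru" overlay:
emerge dev-libs/foo::guru

# Or, in /etc/portage/package.mask, hide everything an overlay
# provides so only explicitly unmasked packages come from it:
#   */*::guru
```

Nothing equivalent exists in cabal today; the suggestion is that repo priorities and per-repo constraints would need first-class support to make multi-repo setups like head.hackage pleasant to use.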

Yes, I wouldn’t mind implementing re-fetching logic for remote imports. It isn’t hard. GHCup does something similar.
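The core of such re-fetching logic is just a cache-freshness check. A minimal sketch in Haskell, assuming a simple time-to-live policy (`shouldRefetch` is a hypothetical name; cabal’s actual caching behaviour may differ):

```haskell
-- Sketch of time-based re-fetch logic for cached remote imports.
import Data.Time.Clock
  ( NominalDiffTime, UTCTime, addUTCTime, diffUTCTime, getCurrentTime )

-- Re-download when the cached copy is older than the TTL.
shouldRefetch :: NominalDiffTime -> UTCTime -> UTCTime -> Bool
shouldRefetch ttl fetchedAt now = diffUTCTime now fetchedAt > ttl

main :: IO ()
main = do
  now <- getCurrentTime
  let ttl   = 3600                    -- one hour, in seconds
      fresh = addUTCTime (-60)   now  -- fetched a minute ago
      stale = addUTCTime (-7200) now  -- fetched two hours ago
  print (shouldRefetch ttl fresh now) -- False: keep the cache
  print (shouldRefetch ttl stale now) -- True: re-fetch
```

The real implementation would also want to handle conditional HTTP requests (ETag / If-Modified-Since) so an unchanged remote file isn’t re-downloaded, which is presumably what GHCup does.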