Publishing my first package in hackage

I recently published a very small package, partly to learn a bit more about the ecosystem. Although I succeeded, I wanted to share some paper cuts in the process, in case they are worth improving from a developer-experience perspective for newcomers. I’m not sure if this is the right place to discuss them.

I used cabal-install 3.10.3.0 and ghc 9.4.8 since those are the recommended versions at this time.

cabal init generated a build-depends: base ^>=4.17.2.1 constraint.
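
For readers unfamiliar with the caret operator: under the PVP, ^>= pins the major version, so the generated constraint is shorthand for a hard upper bound at the next major release of base:

  -- what cabal init emitted:
  build-depends: base ^>=4.17.2.1
  -- which is equivalent to:
  build-depends: base >=4.17.2.1 && <4.18

Since base is non-upgradeable and fixed per GHC release, this effectively pins the package to GHC 9.4.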

I uploaded a candidate package as suggested, and everything looked good enough. The first thing that struck me was that the uploaded candidate was not getting documentation. I wondered if that was a limitation of the candidate workflow or if something was wrong with my package. Some other candidates seem to have documentation, but it is not clear to me whether those docs were uploaded manually.

I then proceeded to publish the real package at 0.1.0.0. There was no red flag in the candidate upload from what I could see.

After publishing I got a build error. The base ^>=4.17.2.1 constraint seems to be too restrictive for the Hackage builders, even though it was created by cabal init using all the recommended versions.

Resolving dependencies...
Error: cabal: Could not resolve dependencies:
[__0] trying: tinyapp-0.1.0.0 (user goal)
[__1] next goal: base (dependency of tinyapp)
[__1] rejecting: base-4.18.1.0/installed-4.18.1.0 (conflict: tinyapp =>
base^>=4.17.2.1)
[__1] skipping: base-4.20.0.1, base-4.20.0.0, base-4.19.1.0, base-4.19.0.0,
base-4.18.2.1, base-4.18.2.0, base-4.18.1.0, base-4.18.0.0 (has the same
characteristics that caused the previous version to fail: excluded by
constraint '^>=4.17.2.1' from 'tinyapp')
[__1] rejecting: base-4.17.2.1, base-4.17.2.0, base-4.17.1.0, base-4.17.0.0,
base-4.16.4.0, base-4.16.3.0, base-4.16.2.0, base-4.16.1.0, base-4.16.0.0,
base-4.15.1.0, base-4.15.0.0, base-4.14.3.0, base-4.14.2.0, base-4.14.1.0,
base-4.14.0.0, base-4.13.0.0, base-4.12.0.0, base-4.11.1.0, base-4.11.0.0,
base-4.10.1.0, base-4.10.0.0, base-4.9.1.0, base-4.9.0.0, base-4.8.2.0,
base-4.8.1.0, base-4.8.0.0, base-4.7.0.2, base-4.7.0.1, base-4.7.0.0,
base-4.6.0.1, base-4.6.0.0, base-4.5.1.0, base-4.5.0.0, base-4.4.1.0,
base-4.4.0.0, base-4.3.1.0, base-4.3.0.0, base-4.2.0.2, base-4.2.0.1,
base-4.2.0.0, base-4.1.0.0, base-4.0.0.0, base-3.0.3.2, base-3.0.3.1
(constraint from non-upgradeable package requires installed instance)
[__1] fail (backjumping, conflict set: base, tinyapp)
After searching the rest of the dependency tree exhaustively, these were the
goals I've had most trouble fulfilling: base, tinyapp

I then proceeded to publish a couple of new versions until the build succeeded. I settled on base >=4.17.2.1 && <5.0, and luckily a couple of hours later the docs were also built. In hindsight, maybe I should have edited the package metadata instead of publishing new versions.

What I would have liked to have:

  • For cabal init to generate something that is compatible with the Hackage build process. I’m not sure if this is more on the cabal or the Hackage side.
  • In the candidate upload, to know whether the dependencies are acceptable to the Hackage build process
  • In the candidate upload, to have a clear expectation of whether docs will be built or not
  • When uploading a package that differs only in metadata from the latest version, to warn about or reject it unless the bump is forced. This is mostly to avoid growing the index in such situations.
  • To be able to know how far along the different queues the package is.

Before submitting issues to either cabal or hackage, I wondered what you think about this and whether these are improvements worth pursuing.

17 Likes

I think all of your points make sense. To explain a bit what’s happening here:

  • cabal init generates a cabal file compatible with the version of GHC you’re using. With GHC 9.4, it will use bounds appropriate for that version
  • The base package has a fixed version for a given version of GHC. So cabal init generated a cabal file with a base version that JUST matches 9.4
  • The hackage server only builds with the three latest GHC versions, I believe. So 9.10, 9.8, 9.6. All of which have newer versions of base
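
For concreteness, the base versions shipped with those GHC releases (to the best of my knowledge) are:

  GHC 9.4  -> base 4.17
  GHC 9.6  -> base 4.18
  GHC 9.8  -> base 4.19
  GHC 9.10 -> base 4.20

So a bound of ^>=4.17.2.1 (i.e. <4.18) is rejected by every compiler the Hackage builders currently use.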

IMO this is the correct behavior; cabal init should generate bounds that work for the compiler you’re currently using. It’d be very weird if cabal build failed after a cabal init. Perhaps one point of improvement here is for cabal init to prompt for versions of GHC to be compatible with.

Thanks for acknowledging. I wasn’t aware that the Hackage server has a three-latest-GHC-versions rule.

I see how the current behavior is correct. And how ghcup, ghc, hackage, and cabal are independent. But the collective behavior seems off to me.

I think that, if cabal init continues to emit base constraints that only work for one major GHC version, then it would be ideal for Hackage to support at least the current recommended GHC version. Ideally even one more, to allow some additional time for users to migrate.

My other suggestions still hold, but I believe stronger alignment between the tools would make things easier for newcomers.

1 Like
  • Please declare proper tested-with field and generate GitHub Actions with haskell-ci. If you do it prior to publishing, (in)compatibility with different versions of GHC would not be a surprise.
  • The preferred way to update metadata such as version bounds is to make a revision, e.g. via the “Edit package metadata” page for tinyapp-0.1.0.2 on Hackage.
  • If Hackage builder gets picky, you can generate Haddocks locally and upload them manually: cabal haddock --haddock-for-hackage && cabal upload --documentation.
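
Putting the first and third suggestions together, a minimal sketch might look like this (the GHC versions and file names are illustrative, not prescriptive):

  -- in tinyapp.cabal:
  tested-with: GHC ==9.4.8 || ==9.6.6 || ==9.8.2

  # then, from the package directory:
  haskell-ci github tinyapp.cabal
  cabal haddock --haddock-for-hackage
  cabal upload --documentation dist-newstyle/tinyapp-0.1.0.0-docs.tar.gz

haskell-ci reads the tested-with field to decide which GHC versions the generated GitHub Actions workflow should build against.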
2 Likes

Hello Brian,

indeed this is definitely poor UX. Can I ask you how you installed GHC/cabal? And when?

Taken in isolation, every tool does the right thing, but together they confuse the user. That is suboptimal; there is space for improvement, even though I can’t pinpoint where.

2 Likes

I think there’s definitely some room for improvement in coordinating on what is the oldest version of GHC that our tools should support. At the moment everyone makes this decision more-or-less independently, which can lead to this sort of situation.

I usually work with projects that have ghc and cabal provided by nix. About a month or two ago I installed ghcup from brew to start from scratch, as a newcomer probably would. My current version is 0.1.22.0, which is still the latest.

I wanted a stable environment from scratch where hls would work out of the box, so I decided to stick with the “recommended” versions of cabal, ghc, and hls. Without knowing exactly what the criteria are for branding something as stable, I assumed that they would work together. And that has been the case :-), until my attempt to publish a package, when things seemed not aligned anymore.

@Bodigrim thanks for the suggestions. I agree that those are good practices to follow, but they are hard to discover unless someone tells you about them. Again, my aim was to learn a bit more of the ecosystem, so I did only the minimum required to publish a package.

1 Like

FWIW I very much agree that cabal init behaviour is unhelpful here. People almost never want to limit base to a single major version. I think the right mechanism for cabal init would be to offer a choice between base >= 4.17.2.1 && < 5 (default, but marked as “non-PVP-compliant”) and base ^>= 4.17.2.1 (not default, but marked as “strictly PVP-compliant”).
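
Concretely, the two flavours such a prompt could offer (reusing the bounds from upthread) would be:

  -- default, works across GHC releases but not strictly PVP-compliant:
  build-depends: base >=4.17.2.1 && <5

  -- opt-in, strictly PVP-compliant but pinned to one major version:
  build-depends: base ^>=4.17.2.1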

I’d say that setting up CI falls under “minimum required” in my book. While the issue with mismatching base versions can admittedly be improved to a certain extent, in general you cannot really expect Hackage to be aligned with your local setup. Try haskell-ci, it’s super easy to use.

2 Likes

I’ve had a few times where I was switching from the latest GHC release back to an earlier version and then had to loosen base bounds of packages I created in the meantime, so I’d be in favor of going further and just defaulting to no bounds on base. Edit: and let the author add their own bounds when the time comes to publish the package. I believe cabal upload or at least cabal check already warns about missing bounds.

Well, there is a tension between convenience and long-term sustainability. The usual argument against putting an upper bound is that you don’t know the future and cannot determine which major release of a dependency will finally break your package; thus, to avoid making revisions as you go, you’d rather make a single revision once it breaks. This reasoning is flawed, but nevertheless. However, you cannot make such an argument against putting a lower bound: you can always find one upfront.

I’m not sure I’d call it flawed.

It just means that maintainers who omit upper bounds on base have to check all their package versions on a new GHC release.

However, I’d argue you are supposed to do the same when having upper bounds, otherwise you’re restricting possibly sound configurations (which is as bad as allowing unsound configurations).

If GHC had proper release candidates for all releases, there wouldn’t even have to be a lag (assuming responsive maintainers).

In the scenario of responsive maintainers and GHC release candidates, the choice between those two options is not very significant and they’re both valid.

So the interesting question is: what is the difference assuming unresponsive maintainers?

I understand the topics intersect, but I say: let’s focus on this specific user UX rather than PVP in general.

I believe that with a minimum of coordination between projects this could be avoided, regardless of our stance on PVP. Most likely cabal check could warn about this too.

But maybe there are even simpler or more ergonomic solutions.

6 Likes

Something I realize now I don’t know is: What are the implications when the package is successfully built or not?

When the build badge of a package says “Build: PlanningFailed”, it is still uploaded and can be used as a dependency just fine from what I can see. The docs will not be built, although I can upload them manually.

Will hackage check whether my package still builds when its dependencies get new versions?

When new dependencies are uploaded, hackage lets you opt-in to email notifications that warn if they are out of range for your package. However, it does not replan or rebuild existing packages in such a circumstance.

A package building on hackage is precisely what you’ve observed – the docs and test coverage are made available. A package failing is an indication that with the specific ghc version of the hackage builder, the package fails to build.