That’s a great approach, and ought to be the standard! You could probably easily integrate cabal-plan-bounds
into this setup to compare the declared bounds against the versions you actually test against, and then either prune the bounds or add new CI jobs as needed to fill the holes. It can also check that the upper bound is actually reached. I can assist, as I am interested in more early-adopter feedback.
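For instance (a minimal sketch; file names and version numbers here are purely illustrative), each CI job could build against its own project file that pins dependencies at one end of the declared range, so the tested plans actually exercise the bounds being claimed:

```
-- cabal.project.lower (illustrative): one CI job builds against the oldest
-- versions the package claims to support
constraints:
  text ==1.2.5.0,
  bytestring ==0.10.12.1

-- cabal.project.upper (illustrative): another job builds against the newest
-- versions, confirming the upper bound is actually reached
constraints:
  text ==2.1,
  bytestring ==0.12.1.0
```

cabal-plan-bounds can then read the plan.json files produced by those jobs and compare or update the bounds in the .cabal file (see its README for the exact invocation).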
I think your example demonstrates a need to be able to declare that you’re using a package in a way that makes even its major updates likely compatible. You should be able to say something like text ^^>= 0.2, meaning “I can work with all existing text versions after 0.2, and if in 50 years version 99.0 turns out to be incompatible, I’ll add a hard upper bound.”
That’s essentially >= 0.2.
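To make the difference concrete, here is a minimal sketch of how the three forms would read in a build-depends field (the ^^>= operator is the hypothetical one proposed above; cabal does not accept it today):

```
-- hard PVP-style range: new major versions of text are excluded
-- until the maintainer bumps the bound
build-depends: text >= 0.2 && < 0.3

-- the caret operator is shorthand for that same range
build-depends: text ^>= 0.2

-- the hypothetical "soft" operator proposed above (not valid cabal syntax):
-- assume future major versions stay compatible until proven otherwise
-- build-depends: text ^^>= 0.2

-- which, as noted in the reply, is operationally the same as a bare lower bound
build-depends: text >= 0.2
```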
It is? Good, I’ll stop feeling guilty and start using it more often.
Please don’t. The policy of requiring upper bounds has very good reasons, based on the curation experience of many package maintainers and Hackage trustees over the years. It’s essentially the core point of the PVP. When a package fails due to a dependency changing, this can be confusing to diagnose and disruptive to the ecosystem, especially for non-expert users who would not have a clue that this is what is going on. However, when a package fails due to a solver failure on an upper bound, the message states clearly (well, relatively clearly, and it’s ongoing work to improve this) that this is the cause; further, the remedy – test with a relaxed bound and, if successful, request a revision bump – is straightforward to implement.
Whenever a new ghc (with a new base) comes out, there’s a cavalcade of failures across hackage, and the whole community springs into action to resolve these. Hard-won experience shows that this is a much less painful process when upper-bounds are already in place.
It does mean maintainers sometimes have to bump bounds they otherwise would not need to. However, the alternative – that all Hackage users potentially have to realize that a failure is even bounds-related at all, and then bisect back to which bound would make sense, if any – is much worse.
My PR makes this a recommendation rather than a requirement. I find that more reasonable spec-wise.
I don’t think that makes a difference. PVP compliance itself is not required; instead, it is specified on Hackage with the RFC meaning of “should”. Since we only have recommended/encouraged adherence to the PVP as a whole, there’s no reason to weaken the PVP directly as well.
The spec indicates very clearly that this may change:
At some point in the future, Hackage may refuse to accept packages that do not follow this convention. The aim is that before this happens, we will put in place tool support that makes it easier to follow the convention and less painful when dependencies are updated.
No worries: since the plan is to first “put in place tool support that makes it easier to follow the convention”, it won’t happen anytime soon.
Yeah… and I think all this talk is not very spec-like and belongs in an FAQ or some other soft document, not in the spec itself.
After seeing and helping with the fallout from the (entirely correct and PVP-compliant) release of aeson-2.2.0.0, I’ll add another vote for “please don’t”. Cleaning up the mess involved (for several packages) downloading every release they ever made, grepping through for uses of import Data.Aeson.Parser (which moved to attoparsec-aeson; thank god there was an easy thing to test for), and then revising many Hackage releases so the old versions don’t get pulled into strange build plans.
I think there are some misunderstandings:
- Hackage doesn’t enforce the PVP at the moment anyway: it’s questionable whether the proposed change would have any real-world impact, except for tooling that uses the PVP spec. It still recommends using upper bounds.
- Actually, I find it highly questionable that the spec talks about ecosystem issues at all… and not only that, it also talks about a specific ecosystem. That makes it a poor spec. It’s a versioning spec and shouldn’t talk about how to handle your dependencies at all, other than giving you expressive syntax to describe them.
- Hackage or any other ecosystem can easily enforce a superset of the spec. That’s entirely reasonable.
- Why would I use upper bounds on bytestring if all I’m importing is pack? What is the reason to have the maintainer play whack-a-mole on upper bounds in this case?
This is the first I have heard of such a thing!! I started in Haskell in 2010, and the upper bound requirement has always felt to me like an idea based on theory rather than practice; that’s what made all of its shortcomings seem even more regrettable. All I’ve ever known are experience reports about the upper bound requirement, which are universally negative.
If there are painful experience reports of a time before(?) the upper bound requirement, I would love to hear them. It might go a long way towards making it more palatable.
The article motivating the pvp was published in 2005, right as cabal and hackage were really coming into being. The first pvp page on the haskell wiki reached a recognizable form in 2007: Package versioning policy - HaskellWiki
Here are all 15 packages that were on hackage in 2006:
https://web.archive.org/web/20060303165044/http://hackage.haskell.org/ModHackage/Hackage.hs?action=view
What I will note is that we nonetheless have extensive experience with both upper bounds and their absence, because there have been plenty of packages on Hackage that do not follow the PVP with regard to upper bounds, and we have experienced the difficulties of keeping things working with them.
As jackdk describes, if a package has many releases and no upper bounds, then a new release of a dependency that causes breakage necessitates adding upper-bound revisions to all prior versions as well, lest the solver fall back to them. In the converse case, it requires relaxing upper bounds on at most the few most recent releases.
Most end-users do not see the pain of this, and only the maintainers who do not put upper bounds do, as well as the trustees who have to go fix the ecosystem when problems occur. We never did create a database of all the reported issues to run quantitative analysis on, but the old reddit threads arguing about this stuff all had a fair accounting of stories as I recall (and also covered most all of the discussion here, repeatedly).
A sampling:
- https://www.reddit.com/r/haskell/comments/ydkcq/pvp_upper_bounds_are_not_our_friends/
- https://www.reddit.com/r/haskell/comments/gf7uw8/on_pvp_and_restrictive_bounds/
- https://www.reddit.com/r/haskell/comments/2m14a4/neil_mitchells_haskell_blog_upper_bounds_or_not/
- https://www.reddit.com/r/haskell/comments/1ns193/why_pvp_doesnt_work/
Skimming these, by the way, I see that a number of the proposals to improve things have already been implemented (including something like the caret operator, as well as the --allow-newer flag, which can be very granular in project files).
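For reference, the granular form looks like this in a cabal.project file (package names here are illustrative); it relaxes only the named dependency edge rather than all upper bounds at once:

```
-- cabal.project (sketch): ignore only somepkg's upper bound on base,
-- leaving every other bound intact
packages: .

allow-newer: somepkg:base
```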
Ideas for how to collect more quantitative data on the effects of different bounds policies are very welcome – and indeed one of the threads has some pretty neat analysis derived from the 01-index.tgz hackage tarball and analyzing metadata revisions.
Here is an alternative angle that would allow the Haskell community or Hackage to create their own policies and take the burden of deciding on such far-reaching consequences away from the PVP maintainers: [RFC] Make PVP ecosystem/language agnostic · Issue #54 · haskell/pvp · GitHub
@jackdk’s comment is an example of painful experience due to missing upper bounds, isn’t it? Or did I misunderstand your question?
You’re right, it is.
So am I correct in understanding that the trade-off is, at the extremes,
- An author releases a new major version of a package for reasons other than a breaking change. No downstream user can use it until any/all intermediate packages are revised to mark compatibility.
Versus
- An author releases a breaking change in a new major version. Any current or old version of an intermediate package will satisfy the build plan, but will fail to produce correct results.
(Of course, most situations are not at either extreme, i.e. releases are made for a mix of reasons and breakage only affects some percentage of consumers. I think a full analysis would need to account for the factors to be accurate, but I just want to be sure I understand the fundamental trade-off first.)
@chreekat a cost-benefit analysis of upper bounds from the viewpoint of Hackage Trustees is available at https://github.com/haskell-infra/hackage-trustees/blob/master/cookbook.md#best-practice-for-managing-meta-data.
(cross-posting from the GitHub issue linked above)
IMHO the essence of upper bounds is this: being able to compile an old unmaintained package should be a no-op.
Of course using a newer GHC might not be possible, but with upper bounds one is guaranteed to have at least one build plan that works, for some version of GHC.
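As a sketch (package and version numbers purely illustrative), a fully-bounded stanza is what makes that guarantee possible: the solver can always fall back to the dependency range that was known to work when the package was released, even years later.

```
-- hypothetical old, unmaintained package: both bounds recorded at release time,
-- so at least one working build plan remains reconstructible
build-depends:
  base       >= 4.12 && < 4.15,
  text       >= 1.2  && < 1.3,
  bytestring >= 0.10 && < 0.11
```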
As a test, someone tell me what dependency versions I need to make wai-middleware-auth work. It’s not a trick question: I spent a couple of days trying to figure it out and gave up.