Dependency version bounds are a lie

I think that this, or something like this, is indeed the next level of sophistication, but it’s also a sizeable step, so I guess that’s why nobody has tried it. It could be that the time is now ripe for it – so give it a shot!

Until then I’m hacking away on my little build-plans-to-version-deps tool :slight_smile:

1 Like

Ok, here is a first prototype, showing what I think is a good way to approach the problem:

$ cabal build -w ghc-9.2.4 --builddir dist-9.2
Build profile: -w ghc-9.2.4 -O1
…
$ cabal build -w ghc-9.0.2 --builddir dist-9.0
Build profile: -w ghc-9.0.2 -O1
…
$ cabal run -v0  cabal-plan-bounds -- -i dist-9.0/cache/plan.json -i dist-9.2/cache/plan.json  -c cabal-plan-bounds.cabal 

Now the tool has updated its own .cabal file (how meta), with the following diff:

$ git diff
diff --git a/cabal-plan-bounds.cabal b/cabal-plan-bounds.cabal
index 8e5dce8..bee19b1 100644
--- a/cabal-plan-bounds.cabal
+++ b/cabal-plan-bounds.cabal
@@ -21,11 +21,14 @@ executable cabal-plan-bounds
     main-is:          Main.hs
     other-modules:    ReplaceDependencies
     -- other-extensions:
-    build-depends:    base, Cabal-syntax, cabal-plan,
+    build-depends:    base ^>=4.15.1.0 || ^>=4.16.3.0,
+                      Cabal-syntax ^>=3.8.1.0,
+                      cabal-plan ^>=0.7.2.3,
+                      optparse-applicative ^>=0.17.0.0,
+                      containers ^>=0.6.4.1,
+                      text ^>=1.2.5.0
       -- a comment in the middle of te build depends
-      optparse-applicative
-      , containers, text
-    build-depends:   bytestring
+    build-depends:   bytestring ^>=0.10.12.1 || ^>=0.11.3.1
     build-depends:
     hs-source-dirs:   src/
     default-language: Haskell2010

You can see that

  • It left indentation and comments in place (but comments within a field are moved to the end of the field)
  • It left the build-depends fields as they were – one field or multiple fields, including the order of the dependencies.
  • Every dependency now has a version range given as a disjunction of ^>= constraints, corresponding to the build plans passed in (and I could pass more, of course) – see the expansion right after this list.
  • The build-depends field value is pretty-printed as Cabal would do it, commas at the end, one per line. (I’d leave it to a separate cabal file formatter to clean this up if you don’t like it).
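
For reference – this is just standard Cabal caret-operator (^>=) semantics, not extra output from the tool – the disjunction generated for base above is shorthand for explicit major-version ranges:

base ^>=4.15.1.0 || ^>=4.16.3.0
-- which is equivalent to
base >=4.15.1.0 && <4.16 || >=4.16.3.0 && <4.17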

That’s it, I’d say, the rest is “just” polishing of the tool itself (error handling, tests, the usual stuff). But not today. I dumped the current state of affairs here if someone (@tcard) wants to play around with it:

4 Likes

Ok, I kinda like where this is going, so now

Let’s see if this catches on…

3 Likes

My small contribution to this space is the Immutable Publishing Policy (IPP), which I am using for the only package I still maintain, Lucid. If you’re using the versions of lucid released since I adopted this policy, then you never need upper bounds, as nothing will ever break.

Indeed, for lucid2 which is “IPP-native”, all versions are simply dates because there are no breaking changes.

My modest proposal is that for packages that are mature and have crystallised, like lucid, it would be practical, helpful and easy to adopt IPP, so that, at least for those packages, end-users can finally relax.

This approach is more about asking maintainers to stop breaking (which is the correct word) things, rather than about building tooling to “manage” the chaos created by maintainers. I may be in the minority on this, although the good thing about IPP is that you can adopt it without bothering anyone else.

5 Likes

I’m rather sympathetic to this. It’s one of the more extreme views on the subject (in the literal sense: not many people have proposed anything that goes beyond this – no moral criticism implied), but I think it actually has a lot of merit and we’d do well as a community to move some amount towards it.

1 Like

I also sympathize.

However, at least at the package level (not the module level), I believe you can execute this policy within the PVP realm. Instead of copying the entire package, you just bump the major version and keep long-running git branches.

This is the major issue with how the PVP is practised today: no one bothers maintaining multiple major versions. IMO, if you break the API a lot, you should also backport a lot.
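
A minimal sketch of that branch-per-major-version workflow (the branch and version names are made up) – a breaking release gets its own long-running branch, and fixes are backported to it:

# the 1.x series lives on in its own branch after 2.0 breaks the API
git branch release-1.x v1.4.0

# later: backport a fix from main and publish 1.4.1 from that branch
git checkout release-1.x
git cherry-pick <commit-with-the-fix>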

2 Likes

I haven’t read the whole thread yet :grimacing: but I find my little stack-all tool useful for this kind of broader dependency-coverage testing.

Anyway, I think this is a great topic, though TBH I lose rather more sleep over over-strict bounds in our ecosystem than over too-loose ones: I think there is an insane amount of time/busy-work wasted bumping over-conservative bounds, so if this discussion leads to improving that, it would certainly make life better!

2 Likes

I hope my tool reduces busywork: at least you no longer have to edit .cabal files by hand to change bounds.

I do agree that what we (as a community) have built here seems overly complex, tedious and labor-intensive (but I do not know how I’d like this to be handled instead, let alone how to get out of the current situation).

I’ve spent an unbelievably crazy amount of time in dozens of my open-source Haskell projects on supporting both cabal and stack, maintaining proper lower and upper bounds, compiling with multiple GHCs (and maybe even multiple Stack snapshots), and fixing various bounds-related errors.

It was a huge amount of busy-work and led to massive burnout, so my current view is:

:no_good_woman: I would prefer to do as little of such busy-work as possible :no_good_woman:

:bulb: Remember that almost all Haskell developers are volunteers who spend their free time on this kind of work. As much as I’d like others to maintain their bounds properly and care more about backwards compatibility, I can empathise with not having the desire to do more uninteresting work for free. I can’t ask others to do more so I can do less; it doesn’t work that way.

As a consequence, I’m not convinced that any solution that requires volunteers to do more (e.g. using an extra tool like cabal-plan-bounds and building a potentially big project with multiple GHC versions locally to edit dependencies, instead of just editing dependencies) will ever work at a massive scale.

Asking people to perform extra steps only works for people who care deeply enough about the issue to be eager to invest more time, or for people willing to pay money for this kind of work.

You may argue that cabal-plan-bounds actually reduces the amount of work, given the following claim in its README: “You never have to edit the build depends manually again”. But in my view, it just replaces one kind of busy-work (editing bounds manually) with another (building the project locally multiple times).


IMO, the status quo is unfortunate but not critical. I see several problems with package bounds in Haskell at the moment, but w.r.t. the problems discussed in this thread:

  1. Incorrect lower bound: a package specifies bounds >= x.y && < a.b but doesn’t actually build with x.y (e.g. due to using features that only appeared later and never testing the x.y version)
    • As a consumer of this package, I can mitigate the problem by specifying a greater lower bound in my own config (see the sketch right after this list).
  2. Incorrect upper bound: a package specifies bounds >= x.y && < a.b but doesn’t actually build with a.(b - 1) (e.g. because the build tool never chose this version when building the package)
    • Similarly to before, I can adjust the bounds when consuming.
    • Although, if I’m doing this in my own package and I really want to check that the new version is properly supported, I need to edit the .cabal file to allow only the latest version, build, patch, and then relax the constraints again.
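
As an illustration of those consumer-side mitigations, here is a minimal cabal.project sketch (the package name somedep and the versions are made up):

-- cabal.project, in the consuming project
-- force a higher lower bound than the dependency declares for itself
constraints: somedep >= 1.2.3

-- or, when checking whether a claimed-but-untested newer version really works,
-- pin it to that major version instead:
-- constraints: somedep == 2.0.*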

mtl maintainers are the same volunteers as everyone else. If they claimed that mtl can be built with transformers == 0.6.* but haven’t actually tested this, nobody can blame them. Testing multiple build plans can lead to a combinatorial explosion.

The main problem here is that transformers is a boot library that had some backwards-incompatible changes, and mtl can’t just raise its lower bound to use only the latest version of transformers. Fixing that would go a long way, but it’s not an easy issue to fix, I agree :disappointed:


Facing all of the above could be annoying, especially when reading confusing build plan errors, but if we want to address this problem and make a noticeable impact, it makes sense to identify all bounds-related problems and think about the most user-friendly way to resolve them.

1 Like

Those are good points. But I think before figuring out how to make all of this as low-effort as possible, we first have to give those projects that do care about their bounds (regardless of whether they’re paid/funded) the right tools to do so.

Those are likely developers that maintain boot libraries or other critical packages like aeson, servant etc., where there are enough stakeholders and volunteers that additional maintenance burden pays off.

We can then go from there and see if we can develop automation for those steps.

2 Likes

This can be simplified a lot by

  • not using backports, but forward-porting (i.e., applying changes to the oldest version that should contain the change), and
  • automation to actually handle the forward-porting between version branches (see the sketch below).
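
A minimal sketch of forward-porting between version branches (the branch names are made up): the fix is committed on the oldest supported branch and then merged forward into each newer branch, so nothing ever needs to be cherry-picked backwards.

# the fix lands on the oldest branch that should contain it
git checkout release-1.x
git commit -m "Fix the bug"

# forward-port: merge each older branch into the next newer one, oldest first
git checkout release-2.x && git merge release-1.x
git checkout main && git merge release-2.x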

I applied this approach at 2 companies, where supporting/maintaining (very) old versions of the software was necessary. See https://dl.acm.org/doi/10.1145/2993274.2993277 and https://web.archive.org/web/20170707112140/http://www.scality.com/blog/continuous-integration-faster-releases-high-quality/

Happy to discuss more, if interested!

2 Likes

I am now using a rather nice CI setup with cabal-plan-bounds for a library of mine, and documented it at length here:

Maybe it inspires others to do the same.

I wish developers would not have to write long workflow files to get a convenient setup like this, but it’s a start.

2 Likes

Integration into haskell-ci, with its Constraint-Set feature and, of course, Tested-With, would be really cool, and would remove the need to have various ci-config/* files around (since these can be generated from the Tested-With and Constraint-Set definitions, as haskell-ci already does). This would also remove the need to manually write/maintain a GitHub Actions workflow.

I had the idea of setting tested-with with cabal-plan-bounds (so that it’s always correct), but using it to derive the test matrix is a good idea, too, thanks!

I did not know about the constraint-set feature, and can’t find documentation for it. Can you help me?

Integration of that idea with haskell-ci would be amazing! I use haskell-ci whenever I can (although I find it limiting whenever I need to do something slightly non-standard in my build scripts – the old problem of convenient framework vs. composable building blocks, I think).

I’m not sure how I learned about it – likely by reading the sources. In one project, I have the following in cabal.haskell-ci:

Constraint-Set unix-2.7
  Constraints: unix ^>=2.7
  Tests:       True
  Run-Tests:   True

Constraint-Set unix-2.8
  Constraints: unix ^>=2.8
  Tests:       True
  Run-Tests:   True

Constraint-Set optparse-applicative-0.16
  Constraints: optparse-applicative ^>=0.16.1.0
  Ghc: ^>=9.4

Constraint-Set optparse-applicative-0.17
  Constraints: optparse-applicative ^>=0.17
  Ghc: ^>=9.4

Given this, in all jobs (for every GHC version), it’ll build/test the package with unix ^>=2.7 and unix ^>=2.8, as well as with optparse-applicative ^>=0.16.1.0 and optparse-applicative ^>=0.17 in the GHC ^>=9.4 build(s).

In my current setup, this ensures tests are executed with every (combination of) PVP major.major versions of dependencies I claim to support, akin to your ci-configs.

Very nice! I think you should be able to easily copy the second half of my suggested workflow (collect build results and validate/update bounds) and combine it with yours.

Personally, I see some benefits to the directory-of-files approach, in particular that you can easily activate a configuration locally (just pass it to cabal using --project-file).
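
For example (the file name here is made up – it would be whichever of the checked-in configuration files you want to reproduce):

# build locally with exactly the configuration one of the CI jobs uses
cabal build --project-file=ci-configs/ghc-9.4.project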

While I sympathize with the issue this is trying to solve, as a nixpkgs maintainer I am quite afraid that wide adoption of this tool would actually make the ecosystem worse. Finding a working build plan for different libraries (of which one might not have a very recent release, which is totally fine and very common) will only get harder when they constantly bump their lower bounds. I think this might affect every cabal user who uses two libraries that don’t depend on each other – so yeah, probably every cabal user – and the burden of stackage and nixpkgs maintenance would increase a lot.

I really like the approach of trying to have wide ranges of bounds and test it with --prefer-oldest and --prefer-latest.

My prediction would be that widespread adoption of this scheme would decrease the ratio of non-broken Haskell packages in nixpkgs significantly.

I hope not!

I’d also like to test my packages against the package set as defined by nixpkgs (stable and unstable); I just need to figure out the best way of doing that (nixpkgs doesn’t, by any chance, already generate a cabal config like Stackage does?).

In addition, I think it goes well with this approach to have some jobs using --prefer-oldest, or some other way to keep the tested range large.
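
A minimal sketch of such a job’s core step (--prefer-oldest is available in recent cabal-install releases; everything else is as in the normal build jobs):

# ask the solver to pick the oldest versions the declared bounds allow,
# so the lower bounds actually get exercised
cabal build --prefer-oldest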

Sure, it works: Stash · NicolasT/landlock-hs@d7ba147 · GitHub

cabal-plan-bounds formats build-depends differently from how cabal-fmt does, though, so several lines in the diff are not actual changes.

I am so excited for --prefer-oldest – I’ve been wanting that for a long time to verify lower bounds!! Thank you!!

2 Likes