Dependency version bounds are a lie

I haven’t read the whole thread yet (TL;DR :grimacing:), but I find my little stack-all tool useful for this kind of broader dependency coverage testing.

Anyway, I think this is a great topic, though TBH I lose rather more sleep over over-strict bounds in our ecosystem than over under-strict ones: I think there is an insane amount of time/busy-work wasted bumping over-conservative bounds, so if this discussion leads to improving that, it would certainly make life better!

2 Likes

I hope my tool reduces busywork: at least you no longer have to edit .cabal files by hand to change bounds.

I do agree that what we (as a community) have built here seems overly complex, tedious and labor-intensive (but I do not know how I’d like this approach to be handled instead, let alone how to get out of the current situation).

I’ve spent an unbelievably crazy amount of time across dozens of my open-source Haskell projects on supporting both cabal and stack, maintaining proper lower and upper bounds, compiling with multiple GHCs (and maybe even multiple Stack snapshots), and fixing various bounds-related errors.

It was a huge amount of busy-work and led to massive burnout, so my current view is

:no_good_woman: I would prefer to do as little of this busy-work as possible :no_good_woman:

:bulb: Remember that almost all Haskell developers are volunteers who spend their free time on this kind of work. As much as I’d like others to maintain their bounds properly and care more about backwards compatibility, I can empathise with not wanting to do more uninteresting work for free. I can’t ask others to do more so I can do less. It doesn’t work that way.

As a consequence, I’m not convinced that any solution that requires volunteers to do more (e.g. using an extra tool like cabal-plan-bounds and building a potentially big project with multiple GHC versions locally in order to edit dependencies, instead of just editing them directly) will ever work at a massive scale.

Asking people to perform extra steps only works for those who care deeply about an issue and are therefore eager to invest more time, or for people willing to pay money for this kind of work.

You may argue that cabal-plan-bounds actually reduces the amount of work, given the claim in its README that “You never have to edit the build depends manually again”, but in my view it just replaces one kind of busy-work (editing bounds manually) with another (building the project locally multiple times).
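To make that concrete, here is roughly what the local loop looks like; this is only a sketch (the GHC versions are examples, and I’m assuming the -c flag for the .cabal file as shown in the cabal-plan-bounds README):

# build once per supported GHC version, saving the solver’s plan each time
mkdir -p plans
for ghc in 9.2.8 9.4.8 9.6.4; do
  cabal build all --with-compiler ghc-$ghc
  cp dist-newstyle/cache/plan.json plans/ghc-$ghc.json
done

# then rewrite the build-depends sections from the collected plans
cabal-plan-bounds plans/*.json -c mypackage.cabal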


IMO, the status quo is unfortunate but not critical. I see several problems with package bounds in Haskell at the moment, but w.r.t. the problems discussed in this thread:

  1. Incorrect lower bound: a package specifies bounds >= x.y && < a.b but doesn’t actually build with x.y (e.g. due to using features that only appeared later, and never testing version x.y)
    • As a consumer of this package, I can mitigate the problem by specifying a greater lower bound in my own config (see the cabal.project sketch after this list).
  2. Incorrect upper bound: a package specifies bounds >= x.y && < a.b but doesn’t actually build with a.(b - 1) (e.g. due to the build tool never choosing this version when building the package)
    • Similarly to before, I can adjust bounds when consuming.
    • Although, if I’m doing this in my own package and I really want to check that the new version is properly supported, I need to edit the .cabal file to allow only the latest version, build, patch, and relax the constraints again.
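A sketch of those mitigations in the consumer’s cabal.project (the dependency name and versions here are made up):

-- cabal.project of the consuming project
packages: .

-- case 1: raise the broken lower bound ourselves; case 2: stay below
-- the version the package doesn’t actually build with
constraints: somedep >= 1.2.3 && < 1.4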

mtl maintainers are the same volunteers as everyone else. If they claimed that mtl can be built with transformers == 0.6.* but haven’t actually tested this, nobody can blame them. Testing multiple build plans can lead to combinatorial explosion.

The main problem here is that transformers is a boot library that has had some backwards-incompatible changes, and mtl can’t just raise its lower bound to use only the latest version of transformers. Fixing that would go a long way, but it’s not an easy issue to fix, I agree :disappointed:


Facing all of the above can be annoying, especially when reading confusing build-plan errors, but if we want to address this problem and make a noticeable impact, it makes sense to identify all bounds-related problems and think about the most user-friendly way to resolve them.

1 Like

Those are good points. But I think before figuring out how to make all of this as low-effort as possible, we first have to give those projects that do care about their bounds (regardless of whether they’re paid/funded) the right tools to do so.

Those are likely the developers who maintain boot libraries or other critical packages like aeson, servant etc., where there are enough stakeholders and volunteers that the additional maintenance burden pays off.

We can then go from there and see if we can develop automation for those steps.

2 Likes

This can be simplified a lot by

  • not using backports, but forward-porting (i.e., applying changes to the oldest version that should contain the change), and
  • automation to actually handle the forward-porting between version branches (a sketch follows below).
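A minimal sketch of that merge-forward flow, with hypothetical branch names (the linked paper describes the automated variant):

# the fix is committed on the oldest release branch that should contain it…
git switch release-1.2
# …then merged forward, so every newer branch inherits it
git switch release-1.3 && git merge release-1.2
git switch main        && git merge release-1.3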

I applied this approach at two companies where supporting/maintaining (very) old versions of the software was necessary. See https://dl.acm.org/doi/10.1145/2993274.2993277 and https://web.archive.org/web/20170707112140/http://www.scality.com/blog/continuous-integration-faster-releases-high-quality/

Happy to discuss more, if interested!

2 Likes

I am now using a rather nice CI setup with cabal-plan-bounds for a library of mine, and documented it at length here:

Maybe it inspires others to do the same.

I wish developers would not have to write long workflow files to get a convenient setup like this, but it’s a start.

1 Like

Integration into haskell-ci, with its Constraint-Set feature and, of course, Tested-With, would be really cool: the various ci-config/* files could then be generated from the Tested-With and Constraint-Set definitions (as haskell-ci already does), removing the need to manually write/maintain a GitHub Actions workflow.

I had the idea of setting tested-with with cabal-plan-bounds (so that it’s always correct), but using it to derive the test matrix is a good idea, too, thanks!

I did not know about the Constraint-Set feature, and can’t find documentation for it. Can you help me?

Integration of that idea with haskell-ci would be amazing! I use haskell-ci whenever I can (although I’ve found it limiting whenever I need to do something slightly non-standard in my build scripts – the old problem of convenient framework vs. composable building blocks, I think).

Not sure how I learned about it; likely by reading the sources. In one project, I have the following in cabal.haskell-ci:

Constraint-Set unix-2.7
  Constraints: unix ^>=2.7
  Tests:       True
  Run-Tests:   True

Constraint-Set unix-2.8
  Constraints: unix ^>=2.8
  Tests:       True
  Run-Tests:   True

Constraint-Set optparse-applicative-0.16
  Constraints: optparse-applicative ^>=0.16.1.0
  Ghc: ^>=9.4

Constraint-Set optparse-applicative-0.17
  Constraints: optparse-applicative ^>=0.17
  Ghc: ^>=9.4

Given this, in all jobs (for every GHC version), it’ll build/test the package with unix ^>=2.7 and unix ^>=2.8, as well as with optparse-applicative ^>=0.16.1.0 and optparse-applicative ^>=0.17 in the GHC ^>=9.4 build(s).

In my current setup, this ensures tests are executed with every (combination of) PVP major.major versions of dependencies I claim to support, akin to your ci-configs.

Very nice! I think you should be able to easily copy the second half of my suggested workflow (collect build results and validate/update bounds) and combine it with yours.

Personally, I see some benefits to the directory-of-files approach, in particular that you can easily activate a configuration locally (just pass it to cabal using --project-file).
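For example (the file name here is hypothetical), reproducing one CI configuration locally is just:

cabal build --project-file=ci-configs/oldest-deps.project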

While I sympathize with the issue this tool is trying to solve, I, as a nixpkgs maintainer, am quite afraid that widespread adoption of it will actually make the ecosystem worse. Finding a working build plan for different libraries (of which one might not have a very recent release, which is totally fine and very common) will just get harder when they constantly bump lower bounds. I think this might affect every cabal user who uses two libraries that don’t depend on each other (so yeah, probably every cabal user), and the burden of stackage and nixpkgs maintenance would increase a lot.

I really like the approach of trying to have wide ranges of bounds and testing them with --prefer-oldest and --prefer-latest.

My prediction would be that widespread adoption of this scheme would decrease the ratio of non-broken Haskell packages in nixpkgs significantly.

I hope not!

I’d also like to test my packages against the package set as defined by nixpkgs (stable and unstable); I just need to figure out the best way of doing that (nixpkgs doesn’t by chance already generate a cabal config like Stackage does?).
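(For reference, the Stackage config mentioned above can, with cabal-install >= 3.8, be pulled straight into a project file; something along these lines:)

-- cabal.project pinning dependencies to a Stackage snapshot
packages: .
import: https://www.stackage.org/lts/cabal.config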

In addition, I think it goes well with this approach to have some jobs using --prefer-oldest, or some other way to keep the tested range large.
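For instance, a dedicated CI job could run the following (--prefer-oldest was added in cabal-install 3.10, if I recall correctly):

# resolve to the oldest versions the bounds admit, so the lower
# end of the advertised range stays tested
cabal build --prefer-oldest
cabal test --prefer-oldest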

Sure, works: Stash · NicolasT/landlock-hs@d7ba147 · GitHub

cabal-plan-bounds formats build-depends differently than cabal-fmt does, though, hence several lines in the diff are not actual changes.

I am so excited for --prefer-oldest, I’ve been wanting that for a long time to verify lower bounds!! Thank you!!

2 Likes

The problem with people testing against nixpkgs is, ironically, that it doesn’t help nixpkgs maintenance at all. We are kind of victims of our own success when that happens. Projects which test (only) against the current nixpkgs very often break in nixpkgs when we update it, because they never had an incentive to fix any build errors with newer packages. They only start testing against the new versions of their dependencies when we release a new version of nixpkgs, but at that point we already needed the fix.

That’s more about upper bounds, though, whereas my fear with cabal-plan-bounds is more about lower bounds. So it might not be that bad. Actually testing against the newest versions available on Hackage, stackage-nightly and stackage-lts will probably bring us a long way.

1 Like

It probably wouldn’t be hard to create a nix derivation which generates a cabal config file like that.

Having CI which tries to figure out the loosest possible lower bounds and then autogenerates the bounds seems fine to me. But then the bounds should really be convex, i.e. every version inside the advertised range should actually build, with no gaps.

IMHO, this is a case where downstream is pushing a problem they might have upstream.

The intent of something like cabal-plan-bounds is to ensure that a package author who wishes to do so can publish a package with (lower and upper) bounds set to something that’s known to work, validated by CI or otherwise (manually). That validation in any case has a cost (CI may be somewhat free-ish nowadays, but still); hence, the package author might decide to bump lower bounds, because keeping them low (and tested) is costly when new upper bounds need to be supported.

Even if there’s no technical reason to bump said lower bounds, i.e., the package may still work with older versions of the dependency.

If downstream (a distribution, a company, …) wants to ship/use the package with older versions of dependencies, it’s on them to validate that things work properly, and to apply bounds relaxations as appropriate (e.g., by applying a vendor patch during the package build process, or maybe using allow-older definitions in cabal.project).
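A minimal sketch of the cabal.project route (the package and dependency names are hypothetical):

-- downstream cabal.project: accept an older somedep than
-- somepkg’s .cabal file admits, and pin the version we ship
allow-older: somepkg:somedep
constraints: somedep == 1.0.*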

Nice!

You can pass multiple cabal files to the tool in one invocation; then the plans are parsed only once.
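So something like this (the file names are made up; -c per the README):

# one invocation: plans parsed once, both .cabal files updated
cabal-plan-bounds plans/*.json -c pkg-a.cabal -c pkg-b.cabal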

Can you just run cabal-fmt after it?

:man_facepalming: Of course.

That’s what I meant! Add nixpkgs as an additional settings you use, to keep that lower bound alive and tested. It’s not perfect yet (maybe someone creates a tool that calculates a set of “full range covering build plans”…) but might work most of the time.