Does cabal update force recompilation of everything?

Hi!

I have bumped the vty version from 5.38 to 5.39. Cabal rejected the change because my local Hackage index was 100+ days old (from before vty-5.39 was released). So I ran cabal update && cabal build, and suddenly I had to download and rebuild every dependency of the project.

Both vty-5.38 and vty-5.39 have the same dependency bounds, so I’d assume that cabal doesn’t need to mess with any transitive dependency, just download vty and build it… Am I using cabal wrong? Does cabal build assume you want the latest version over the already-built version?

The reason my local Hackage index was so old is precisely to avoid this situation, which already happened a few months ago. Is there any cabal build --disk-space-isn't-that-f****-cheap flag? (yes, I am ranting… :angry: )

Thanks in advance! :slight_smile:

1 Like

Maybe the solver picked different versions due to the new index? If you used a freeze file, then you would’ve held everything constant when bumping vty.
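For reference, cabal freeze records the current install plan as exact-version constraints in a cabal.project.freeze file. A hypothetical excerpt (the package versions below are invented for illustration) might look like:

```
-- cabal.project.freeze (generated by `cabal freeze`; versions are illustrative)
constraints: any.vty ==5.38,
             any.text ==2.0.2,
             any.microlens ==0.4.13.1
```

With that file in place, bumping vty means editing (or deleting) just its constraint line, so the rest of the plan stays put.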

4 Likes

It doesn’t force, but like @Ambrose says, it picks a different install plan. I believe its heuristic picks an install plan with later versions of dependencies, if possible, all the way down the dependency tree. This ends up meaning that almost everything has to be recompiled. I find this behaviour very annoying and I wish the heuristic would pick an install plan that minimizes recompiles.

2 Likes

Like @Ambrose, I can only recommend freeze files to have rigorous, granular control over transitive dependencies. My intuition would be that these transitive dependencies had new releases in the meantime, and cabal’s solver tries the most recent version first (that fits the declared bounds).

2 Likes

Actually, maybe this is possible? I haven’t tried it, and I don’t know if it works, but if it does what is documented, then I think it might be a solution!

https://cabal.readthedocs.io/en/stable/cabal-project.html#cfg-field-preferences

One way to use preferences is to take a known working set of constraints (e.g., via cabal freeze) and record them as preferences. In this case, the solver will first attempt to use this configuration, and if this violates hard constraints, it will try to find the minimal number of upgrades to satisfy the hard constraints again.

Of course, for it to really work with the output of a freeze file, one would hope that there would be a preferences-file option.
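Concretely, and assuming the documented behaviour, one could paste the constraints from a freeze file into cabal.project as soft preferences (versions invented for illustration):

```
-- cabal.project (sketch): prefer a known-good set of versions,
-- but let the solver deviate where hard constraints require it
packages: .

preferences: vty ==5.38,
             text ==2.0.2,
             microlens ==0.4.13.1
```

Unlike constraints, these are only hints: the solver may still pick other versions when it has to.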

1 Like

Yeah, Cabal really needs a --minimize-recompilation flag (with a snappier name, and arguably on by default). Come to think of it, this is very similar to what --offline does, and wasn’t --offline actually recently fixed for v2-build?

2 Likes

I don’t think cabal should have a flag like --minimize-recompilation. Even though it is annoying, the idea is that everyone gets the same install plan per platform.
Any step to make this less declarative opens a can of worms.
I think it is one of the main features of cabal v2 commands that they work reliably the same for everyone.

The --offline flag, I hope at least, should just error out if the install plan requires a download, not influence the install plan.

2 Likes

--minimize-recompilation is perfectly declarative. It’s just a heuristic to choose a different install plan, in the same way that cabal.project.local can override the install plan.

EDIT: Oh, I guess it’s not declarative because it implicitly depends on the state of the user’s package store. But still, I think there are probably fairly easy ways we can restore the declarative nature, in the same way that we don’t object to freeze being non-declarative.

2 Likes

Given a freeze file, I can imagine a tool that recomputes the pinned dependency versions, modifying the freeze file. Without a freeze file, I don’t think this flag could be declarative, since it would always depend on either your cabal store or dist-newstyle.

However, I believe such a tool could be nice to have, maybe as a cabal plugin? Break out cabal-install-solver, read some constraints, and compute the minimal changes required to satisfy the new constraints.

1 Like

I think it would be declarative just to base the plan on the freeze file plus the Hackage index state. That doesn’t require knowing anything about the user’s store, but would allow a heuristic of “use the freeze file as far as possible and when not possible defer to the latest package per the Hackage index state”. This seems similar to what @chreekat was suggesting.
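Pinning the index state is already expressible in cabal.project, which keeps the Hackage side of the input deterministic (the timestamp below is arbitrary):

```
-- cabal.project: freeze the view of Hackage at a fixed point in time,
-- so a later `cabal update` cannot silently change the install plan
index-state: 2024-05-01T00:00:00Z
```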

1 Like

Two major limitations of the freeze files approach for avoiding recompilation:

  • They don’t really make a lot of sense for libraries.
  • I might be wanting to contribute a small patch to a project of which I’m not a maintainer, and already have suitable dependency versions in my cache, from building other projects. Even if the project had a freeze file, its constraints would be unlikely to match what I have on my machine.

Maybe I should take a poll to see whether it’s just me, but I feel like --minimize-recompilation is the behaviour I want 99% of the time. If I already have versions cached which fit the declared bounds, why would I not just want to use those?

2 Likes

It’s what I want too, but I also take @fendor’s point that it’s desirable for build plans to be deterministic as far as possible. The current situation seems to be that plans are determined purely by the Hackage index state (plus compiler version and platform). I wonder what’s the correct way of adding information so that we can minimize recompilation while remaining deterministic.

2 Likes

I think stackage snapshots are a good solution for this problem. That way you not only retroactively use versions that you have already built, but also proactively choose versions that are compatible with other packages that you are likely to install in the future.

3 Likes

This is a nice idea and may be the most straightforward way of achieving my desired goal. A Stackage snapshot is basically an ecosystem-wide freeze file. I’ll have to investigate how to make this convenient to use with cabal unless anyone knows already.

2 Likes

https://cabal.readthedocs.io/en/stable/nix-local-build.html#how-can-i-have-a-reproducible-set-of-versions-for-my-dependencies
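If I read the linked docs right, with a reasonably recent cabal (3.8+, which added URL imports in cabal.project, if I recall correctly) this boils down to importing a snapshot’s cabal.config (the LTS number below is just an example):

```
-- cabal.project: constrain the whole build to a Stackage snapshot
packages: .

import: https://www.stackage.org/lts-22.33/cabal.config
```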

3 Likes

Unfortunately, I think what you would like is optimizing for you, rather than your users. I think I would expect users to be, in general, more up to date. If Cabal prioritises a plan that suits you by using old dependencies, I think you risk the scenario where the bounds are subtly wrong, but it works for you.

Alright, so then the last test before making a new release would be to then update dependencies, ideally on a separate copy of the project. Changes in dependencies can be dealt with at that time, rather than being a continual distraction from regular developmental work.

I think it’s really just a preference. I personally would always prefer course-correcting as soon as possible, rather than right at the end.

That might work if the project only has a small number of dependencies. But it just doesn’t scale up: lots of dependencies means more “course-correcting” (boring!) and less developing (interesting, and usually the raison d’être for being involved in the project!)

1 Like

This is the reason I don’t run cabal update so often. Afterwards, you have to wait for both cabal and HLS to fully rebuild your project… It makes developing not fun… and fun is the main reason I program in Haskell. Of course, cabal should not aim for fun but for stability and reproducibility, so I guess the current behaviour is the correct one.

Nevertheless, I could imagine a cabal build --develop that would “just take the plan for whatever is already downloaded or cached.” But now we are approaching this other thread again

1 Like