I switched reluctantly to Stack years ago. I understand that cabal has improved a lot since then and has some benefits (especially less disk space, if I am right).
So is it worth switching back to cabal, and if so, how?
Some years ago I reluctantly switched back to Cabal for reasons I won't get into, lest I again summon the beast.
Since then I've seen it steadily improve in reliability and usability. But it still has a long way to go.
Switching is usually a matter of running hpack one last time, learning about the latest syntax of Cabal files to clean things up, learning about Cabal projects to recover a lot of the features I (for one) needed most from Stack, and just using cabal build, cabal test, and cabal repl. Plenty of things are still easier to do with Stack, and if you run into one, I hope there is an issue about it. If not, please open it.
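Concretely, the day-to-day switch is roughly this sketch (assuming an hpack-based project with a package.yaml):

```
hpack             # one last time: regenerate the .cabal file from package.yaml
cabal build       # instead of stack build
cabal test        # instead of stack test
cabal repl        # instead of stack repl
```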
The main reason I switched to Stack was the auto download/setup of GHC.
Now I am also using hpack and the Nix integration; if I understand correctly, the first is not supported (automatically) by Cabal, and the second is becoming deprecated (if it is even equivalent).
So I'm sort of stuck with Stack. On the other hand, I work on a few different projects (which can have different versions/branches), and that is really disk-space consuming. I understand Cabal will help with that.
I'm pretty conservative on package and GHC versions, so I don't really have any other reasons to switch back to Cabal.
Cabal also makes it really easy to just use any version of any library on Hackage.
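For instance, a cabal.project lets you pin or relax a dependency to whatever Hackage version you want; a minimal sketch, with placeholder package names and versions:

```
-- cabal.project
packages: .
constraints: somelib ==1.2.3.4
-- or let the solver go past a stale upper bound:
allow-newer: somelib:base
```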
I don't think managing GHC with some other tool (GHCup, your OS package manager, one of the multiple Nix options) is all that bad. I just have whatever NixOS unstable uses, and it's enough to just use cabal without anything else.
I'm probably the odd one out, as I like to use everything for different purposes.
I like Stack for general development. The snapshot is a quick way of knowing all of my deps are pinned, and any packages not in the snapshot have to be explicitly mentioned in extra-deps. You can write a cabal.project.freeze file to act like a snapshot, but cabal doesn't error if you use a package that's not there (and thus not pinned).
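Roughly, the stack.yaml side looks like this (the snapshot name and version are just for illustration):

```
# stack.yaml
resolver: lts-22.33       # everything in the snapshot is pinned
extra-deps:
  - somelib-1.2.3.4       # anything outside the snapshot must be listed here
```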
I also like hpack auto-finding modules and expanding globs for extra-source-files (particularly useful for golden tests). I find the lack of this makes Cabal very difficult to work with. If I'm writing an application, Stack is my preferred tool.
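For illustration, the hpack side is roughly this (paths and globs here are made up):

```
# package.yaml
library:
  source-dirs: src            # no exposed-modules list; hpack discovers the modules
extra-source-files:
  - test/golden/**/*.golden   # glob expansion, handy for golden tests
```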
But Cabal is great for testing dependency bounds for libraries. It's also great for trying out different (+ unreleased) versions of GHC. So I use cabal in the CI of my libraries to test GHC compatibility and dependency bounds (using --prefer-oldest).
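In CI that amounts to something like the following (assuming a cabal-install recent enough to have --prefer-oldest):

```
cabal build all                   # newest versions the bounds allow
cabal build all --prefer-oldest   # oldest versions the bounds allow
```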
EDIT: Forgot to mention, I also use cabal to globally install executables, like hlint or fourmolu. It's nice to install a specific version of the tool and have cabal find the right deps, whereas with Stack you have to find the resolver that that version is in.
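For example (the version number below is only illustrative):

```
cabal install hlint               # latest release, into the configured installdir
cabal install fourmolu-0.14.0.0   # or pin an exact version; cabal solves its deps
```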
That makes sense. I am using Stack mainly for development; the main reason to switch at the moment is disk usage.
I think it is easier to keep disk usage low with stack, because it should just use one version of each package as long as you use the same resolver in all your projects.
Cabal makes no guarantees about using the same package (unless you specifically ask for it), and it is pretty hard to remove build products that are no longer used. Usually the best you can do with cabal is just to delete the whole ~/.cabal/store/ (and say goodbye to all the executables you installed with cabal).
I am very glad I no longer have to worry about disk space since I bought a 1TB SSD.
What is reproducibility like for each of them?
I have been assuming that with these fixed:
- same OS platform
- same GHC version
- same Cabal (the library) version
- same cabal-install (the tool) version (?)
- same cabal freeze file
- same code base
=> yields a reproducible result.
Is that a correct assumption, and how about in Stack land?
(as a late joiner, I skipped the whole stack saga)
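(For context, by "cabal freeze file" I mean the cabal.project.freeze that cabal freeze writes, something like the sketch below; the package names and versions are made up.)

```
-- cabal.project.freeze, generated by `cabal freeze`
constraints: any.aeson ==2.2.1.0,
             any.base ==4.18.2.0,
             any.text ==2.1
```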
How much is the disk usage? On my personal machine .cabal is 20 GB, although admittedly I'm not doing much development here. I just bought a 2 TB external drive for £62.99. That comes out to less than $1 for my .cabal directory. Unless your situation is very different somehow (a couple of orders of magnitude across combined dimensions), I don't see how it can make sense to switch build tools over disk space.
I forgot that Stack shares dependencies, which is also a problem because stack clean doesn't clean unused dependencies (if I am correct).
My problem is more that it doesn't share objects between different versions of the same project (git worktrees).
My ~/.stack is 30 GB, and the .stack-work directories for my project are: devel 20 GB (which seems a lot) and prod 8 GB; my other project is 2 GB. That's about 60 GB on an SSD drive (I only have 20 GB, but I have had to do lots of cleaning for that).
Stack does emphasise 'reproducibility' as a key objective: see its goals at Contributor's guide - The Haskell Tool Stack.
Is it more reproducible than using cabal freeze (if it still exists and does what I think it does)?
stack clean deletes build artefacts from a project's .stack-work working directory, which contains the artefacts for the local/mutable packages in the project - see clean command - The Haskell Tool Stack. There is no stack command to delete snapshots relating to immutable dependencies. (If such snapshots get out of hand, I delete the snapshots folder in the Stack root and start afresh.)
EDIT: Could you elaborate on "Stack doesn't share objects between different versions of the same project (git worktrees)"? I have always added .stack-work/ to .gitignore.
I can't make a comparison, because I don't use Cabal (the tool), but for an example of what Stack offers to help reproducibility see Snapshot and package location - The Haskell Tool Stack.
Oh yeah, you can get many versions of the same library with Cabal…
But you can get many versions of the compiler installed without confirmation with Stack, and there wasn't a means to uninstall them through Stack last time I used it.
GHC is huge, so that bugs me more. I really appreciate GHCup letting me see what GHC versions I have installed. I limit myself to 3 usually: never more than one per major version if I can help it. If you just used the latest resolver for a year and started new projects regularly enough, you'd have 8 versions by now.
The stack ls tools command will list the tools (principally versions of GHC) that Stack has installed given specified snapshots (if you are using Stack to manage GHC, rather than GHCup) - see ls command - The Haskell Tool Stack.
If that is more versions of GHC than you want, you can delete unwanted ones (the directory and the *.installed file) from Stack's programs directory (stack path --programs).
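A rough sketch of that cleanup (the GHC version is just an example):

```
stack ls tools          # list the GHC versions Stack has installed
stack path --programs   # the programs directory they live in
# remove one by hand: the directory plus its *.installed marker file
rm -rf "$(stack path --programs)/ghc-9.2.8" \
       "$(stack path --programs)/ghc-9.2.8.installed"
```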
If you do not want Stack to install versions of GHC 'automatically', Stack can be configured with the install-ghc option - see Configuration (project and global) - The Haskell Tool Stack.
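In the configuration file that is just the following (the global file is typically ~/.stack/config.yaml; a project's stack.yaml works too):

```
install-ghc: false   # do not download GHC automatically
system-ghc: true     # optional: use a GHC already on the PATH (e.g. from GHCup)
```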
I recently started using Oleg's cabal-store-gc (https://github.com/phadej/cabal-extras/tree/master/cabal-store-gc), which helps a lot for this use case. It garbage-collects the Cabal store, by default treating executables as GC roots, and you can tell it about projects whose (most recently configured) build configurations will also be GC roots. It would be nice to have a more automatic way for it to keep track of project GC roots rather than needing to remember to run it (once) for each project, but it's much better than rm -rf ~/.cabal/store.
FWIW, when I first started testing the waters with Haskell a couple of years back, I started off with Stack (which I found to be kind of confusing at the time), and everything went OK. I've come back to Haskell in a more serious capacity recently and decided to just use plain old cabal. IMO it's actually been quite easy to work with. I needed to learn a little bit about how cabal files work, but mostly it's been nice and I feel like it's quite simple to use now.
Switching is usually a matter of running hpack one last time,
How do you go about adding new modules that hpack would just discover in Stack? Do you just manually update the module list in the cabal file?