I guess you can still run hpack manually.
I put a Makefile in my projects and just type make build, etc. Then the make rule calls cabal or stack or cargo or npm or …
That way my muscle memory does the same thing, regardless of the build system.
And when I use cabal, I have the rule run hpack before calling cabal.
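A minimal sketch of such a Makefile, assuming hpack and cabal are on PATH (the target names and recipes are illustrative):

```make
# Uniform targets in front of whatever build tool the project actually uses.
.PHONY: build test

build:
	hpack         # regenerate the .cabal file from package.yaml first
	cabal build

test:
	hpack
	cabal test
```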
Fwiw, CMake in C++ also encourages, if not forces, you to list individual files manually… people often seem to ask for a file-glob feature.
Perhaps that's the reason why some sadistic people like me aren't super bothered by how cabal works by default :).
I consider this one of the ways cabal is still harder to use.
Attempting to follow my own advice, I looked to see if there was an issue about this. There are a few related things bouncing around that I was able to find. Most importantly for this whole thread, however, is Feature parity with Stack · Issue #8605 · haskell/cabal · GitHub
Since Cabal (cabal-install) removed sandboxes, I switched to Stack, and with some global settings it keeps Haskell development sane, especially if you (have to) care about disk-space usage.
Sane GHC-devel setup:
- GHCup for compiler & toolchain installation
- use that GHCup version implicitly with the system-ghc global option
- prevent installation of other GHC versions with install-ghc set to false
- it usually also helps to loosen the upper-bound checks for the compiler and dependencies with the allow-newer and compiler-check options
- use the stack-clean-old tool to remove snapshot artifacts and keep global storage really lean
- optionally set the resolver to nightly for global projects if using the latest GHC version
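As a rough sketch, these global settings go in Stack's config.yaml (typically ~/.stack/config.yaml); the values shown are illustrative:

```yaml
# Global Stack settings; values are illustrative.
system-ghc: true              # use the GHC found on PATH (e.g. the GHCup one)
install-ghc: false            # refuse to install additional GHC versions
allow-newer: true             # relax upper bounds on dependencies
compiler-check: newer-minor   # accept a newer minor GHC than the snapshot asks for
```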
If I really need an older compiler & libs, or to limit dependency versions, I do it explicitly per-project. This procedure causes fewer headaches than the pollution-prone cabal-install. Now the experience feels on par even with Rust's Cargo.
I wouldn't recommend cabal-store-gc, as it falls flat in some cases, leaving the cabal store and package index in an invalid state. That happened to me reproducibly with xmonad and xmonad-contrib installs. Even the author himself discourages it from normal use as unreliable.
stack-clean-old does handle Stack's root, among other locations, and it seems to be reliable in the long run.
Something that I don't see mentioned, but which is my #1 reason for switching from Stack to Cabal, is that HLS support for executables/test suites is incomparably better with Cabal than with Stack: https://github.com/haskell/haskell-language-server/issues/366
There's a better option:
That's good to know. However, I use Nix integration, so Stack is installing GHC through Nix anyway. Is there a way to get GHCup to use Nix as well?
Both the HLS project and the Stack project are keen that HLS support Stack and that Stack can output the information that HLS needs to support Stack (so that HLS does not depend on "hacks"). There is an open issue on Stack's repository in that regard. From my perspective (Stack's), what I am missing to help - and need to chase - is a precise specification of the information HLS needs that Stack can provide, and in what format. I think HLS needs what is ultimately passed to GHC, but Stack does not know that directly, as Stack builds using Cabal (the library), not directly with GHC.
You can invoke anything from within the stack install hook, including nix. The script has to print the location of the ghc binary to stdout.
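As a sketch, using Stack's GHC installation customisation (an executable script at ~/.stack/hooks/ghc-install.sh); the Nix invocation here is purely illustrative:

```sh
#!/bin/sh
# Stack exports HOOK_GHC_VERSION (among other variables) and expects the
# path to the ghc binary on stdout. How you obtain that GHC is up to you;
# the nix command below is just one illustrative possibility.
set -eu
attr="nixpkgs#haskell.compiler.ghc$(echo "$HOOK_GHC_VERSION" | tr -d .)"
out=$(nix build --no-link --print-out-paths "$attr")
echo "$out/bin/ghc"
```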
Regarding Nix integration, you probably mean this thread.
If you, or anybody else, finds a good reason not to deprecate it (it seems almost all Nix+cabal users say it only confuses newcomers and leads them away from good solutions), then please let the cabal developers know. I'm not a Nix user, so I can't tell whether it's "identical" to the Nix+stack integration or not, but any comparison and cross-pollination would be very welcome, too.
You may know this, but
- stack-clean-old is useful for cleaning stack-installed tools and libs
- ghcup tui is useful for cleaning ghcup-installed tools
- ncdu is useful for exploring these and other disk hogs in more detail
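For example (assuming default install locations; check each tool's --help for the exact commands):

```sh
ghcup tui                # interactively install/remove GHC, HLS, cabal, stack
stack-clean-old --help   # lists its commands for pruning Stack's storage
ncdu ~/.stack            # explore disk usage interactively (also try ~/.ghcup)
```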
In the limit, I don't think there's any difference in disk usage between stack and cabal: you'll use disk according to the number of GHC versions you need for the projects you're currently working on. But I guess stack users will more easily accumulate GHC versions if they're not being careful.
I already mentioned in that thread that I was using Stack's Nix integration. Nobody seemed to react.
In one way I understand, since it is not cabal-related; on the other hand, it might stop me from moving back to Cabal. It's not that I can't replicate how I am using stack+nix; it's more that I don't have the time nor the energy (trying to do anything with Nix when you have forgotten how it works is exhausting).
I thought that Cabal was sharing some build objects that Stack doesn't.
As I understand it, Stack recompiles everything under each directory, including external packages (unless they are in the pre-built snapshot; I'm probably wrong there).
So if you check out the same project twice (like with git worktree) and change some code, Stack will recompile everything but Cabal won't. Am I right?
I may have misunderstood your point about "like git worktree" and Stack rebuilding, but for local/mutable packages of a project, Stack puts the build artefacts of Cabal (the library) in its .stack-work working directory in the project directory. Most people add that working directory to .gitignore. I don't think Stack builds unnecessarily.
I didn't explain it well. By "like git worktree" I mean checking out multiple branches of the same project. Basically, I work on one project A (directory A) and I need to create a branch to work on a long feature (or I am in the middle of a feature and I need to fix a bug on the main branch, etc.), so I check out the project again in directory A-my-branch (or, if I use git worktree, under A/my-branch). This means I now have two .stack-work directories (one in A and one in A/my-branch) which are nearly identical, yet take double the amount of disk space.
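For context, the git side of that workflow looks roughly like this (directory names as in the example above):

```sh
cd A
git worktree add my-branch   # creates a second working tree at A/my-branch
# Each working tree then accumulates its own independent .stack-work.
```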
Given that on my project (not sure why; I think it is the docs) a .stack-work directory is usually about 2 or 3 GB, having a few branches quickly becomes an issue (when you only have 7 GB left on your hard drive).
Maybe there is a way to share a .stack-work between two directories.
I understand that Cabal does this naturally by having a global store.