That’s good to know. However, I use nix integration, so stack is installing GHC through nix anyway. Is there a way to get ghcup to use nix as well?
Both the HLS project and the Stack project are keen that HLS support Stack and that Stack can output the information that HLS needs to support Stack (so that HLS does not depend on ‘hacks’). There is an open issue on Stack’s repository in that regard. From my perspective (Stack’s), what I am missing to help - and need to chase - is a precise specification of the information HLS needs that Stack can provide and in what format. I think HLS needs what is ultimately passed to GHC, but Stack does not know that directly - as Stack builds using Cabal (the library), not directly with GHC.
You can invoke anything from within the stack install hook, including nix. The script has to print the location of the ghc binary to stdout.
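For illustration, here is a minimal sketch of such a hook, assuming Stack’s `ghc-install.sh` hook interface (the `HOOK_GHC_TYPE`/`HOOK_GHC_VERSION` environment variables) and a flakes-enabled nix CLI; the nixpkgs attribute mapping is an assumption, so adjust it for your channel:

```shell
#!/bin/sh
# Hypothetical GHC-install hook for Stack, placed at
# ${STACK_ROOT}/hooks/ghc-install.sh and made executable.
# Stack calls it with HOOK_GHC_TYPE and HOOK_GHC_VERSION set, and
# expects the path of a usable ghc binary on stdout; a non-zero exit
# makes Stack fall back to its normal installation method.

# Map a GHC version such as "9.6.4" to an assumed nixpkgs attribute
# name (haskell.compiler.ghc964).
ghc_attr() {
  echo "haskell.compiler.ghc$(echo "$1" | tr -d '.')"
}

install_via_nix() {
  # Only handle the ordinary "bincopy" request; defer anything else.
  [ "${HOOK_GHC_TYPE:-bincopy}" = "bincopy" ] || exit 1
  out=$(nix build --no-link --print-out-paths "nixpkgs#$(ghc_attr "$HOOK_GHC_VERSION")")
  echo "$out/bin/ghc"
}

# Only act when Stack actually invokes the hook with a version.
if [ -n "${HOOK_GHC_VERSION:-}" ]; then install_via_nix; fi
```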
Updo will give you both Cabal and Stack if using projects.
Regarding Nix integration, you probably mean this thread.
If you, or anybody else, finds a good reason not to deprecate it (it seems almost all Nix+cabal users say it only confuses newcomers and leads them away from good solutions) then please let cabal developers know. I’m not a Nix user, so I can’t tell if it’s “identical” to the Nix+stack integration or not, but any comparison and cross-pollination would be very welcome, too.
You may know this, but
- stack-clean-old is useful for cleaning stack-installed tools and libs
- ghcup tui is useful for cleaning ghcup-installed tools
- ncdu is useful for exploring these and other disk hogs in more detail
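To see where the space actually goes before reaching for those tools, a quick hedged sketch (the paths are the usual defaults; adjust for your setup):

```shell
# Report the size of the common Haskell tooling caches, skipping any
# directory that does not exist on this machine.
for d in ~/.stack ~/.ghcup ./.stack-work; do
  if [ -d "$d" ]; then du -sh "$d"; fi
done
```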
In the limit, I don’t think there’s any difference in disk usage between stack and cabal - you’ll use disk according to the number of GHC versions you need for the projects you’re currently working on. But I guess stack users will more easily accumulate GHC versions if they’re not being careful.
I already mentioned in that thread that I was using stack’s nix integration. Nobody seems to have reacted.
In one way I understand, since it is not cabal related; on the other hand, it might stop me from moving back to Cabal. (Not that I can’t replicate how I am using stack+nix, but rather that I don’t have the time nor the energy; trying to do anything with nix when you have forgotten how it works is exhausting.)
I thought that Cabal was sharing some build objects that stack doesn’t.
As I understand it, stack recompiles everything under each directory, including external packages (unless they are in the pre-built snapshot; I’m probably wrong there).
So if you check out the same project twice (like git worktree) and change some code, stack will recompile everything but cabal won’t. Am I right?
I may have misunderstood your point about ‘like git worktree’ and Stack rebuilding, but for local/mutable packages of a project, Stack puts the build artefacts of Cabal (the library) in its .stack-work working directory in the project directory. Most people add that working directory to .gitignore. I don’t think Stack builds unnecessarily.
I didn’t explain well. When I say ‘like git worktree’ I mean checking out multiple branches of the same project. Basically, I work on one project A, and I need to create a branch to work on a long feature (or I am in the middle of a feature and need to fix a bug on the main branch, etc.), so I check out the project again in directory A-my-branch (or, if I use git worktree, under A/my-branch). This means I now have two .stack-work directories (one in A and one in A/my-branch) which are nearly identical, yet take double the amount of disk space.
Given that on my project (not sure why, I think it is the doc) a .stack-work is usually about 2 or 3 GB, having a few branches quickly becomes an issue (when you only have 7 GB left on your hard drive).
Maybe there is a way to share a .stack-work between two directories.
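One ad-hoc possibility (just a sketch, not an endorsed workflow: sharing artefacts between branches can leave stale build products) is to symlink the second checkout’s .stack-work to the first’s. The directory names below are hypothetical; note that Stack also honours a STACK_WORK environment variable and a work-dir config option, but both must be relative paths:

```shell
# Create a stand-in for the directory that holds both checkouts.
base=$(mktemp -d)
mkdir -p "$base/A/.stack-work" "$base/A-my-branch"

# Point the branch checkout's .stack-work at the main checkout's,
# so both builds write into the same working directory.
ln -s "$base/A/.stack-work" "$base/A-my-branch/.stack-work"
```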
I understand that Cabal does it naturally by having a global repository.
I see. Currently, Stack’s working directory for mutable/local packages is located relative to a project’s root directory. So, if you have, essentially, two separate projects (even if there is a lot of overlap in their code), Stack will have a separate working directory for each one. Stack only has a global store for packages considered to be immutable.
EDIT: There are Stack issues discussing this: Support non-relative STACK_WORK directory · Issue #6191 · commercialhaskell/stack · GitHub (where @hasufell offered a workaround) and Allow setting the working-dir to absolute directories · Issue #6135 · commercialhaskell/stack · GitHub (the main issue for its discussion).
I guess that packages which are given with a specific version (git repo + commit) in the stack.yaml are considered “mutable”.
Also, the docs of immutable packages seem to be per project.
(I checked, and when I said 2 or 3 GB it is actually 6 GB per project.)
Doesn’t cabal have the dist or dist-newstyle directories? Isn’t .stack-work storing the same things as those directories?
I don’t know. I don’t use cabal directly (or is stack using cabal under the hood ?)
I think that if a package is specified as a local filepath to a project directory, it is considered ‘mutable’. If a package is specified as a local filepath to an archive file (one option under extra-deps:), it is considered ‘immutable’. That is based on Build overview - The Haskell Tool Stack (warning: not wholly reliable).
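For concreteness, a sketch of how those cases typically look in a stack.yaml (the package names, paths and commit hash are made up; a git source pinned to a commit is, per the Stack documentation, also treated as immutable):

```yaml
packages:
- .                               # local project directory: mutable, built in .stack-work

extra-deps:
- ../my-patched-lib               # filepath to a directory: mutable
- ./vendor/other-lib-1.0.tar.gz   # filepath to an archive file: immutable
- git: https://github.com/example/some-lib   # pinned git source: immutable
  commit: 0123456789abcdef0123456789abcdef01234567
```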
I’m not sure that is correct, about the documentation of an ‘immutable’ package - I’ll see if I can track down a definitive answer.
On subsequent questions: both Stack and Cabal (the tool) are built on top of Cabal (the library). In the case of Stack, the build artefacts of Cabal (the library) are put in a dist/<hash> directory in .stack-work; the directory reported by the command stack path --dist-dir.
It seems that in my case most of the GBs are taken by the doc (not sure why it is generated, probably me using stack hoogle). The doc of immutable packages seems to be in
For me, something I really value about stack is the ability to swap out a package upstream for a version on my filesystem. (E: or a version on github rather than on hackage, or something.) I don’t have to do this very often, but when I do it’s incredibly helpful. It’s very easy with stack and I wouldn’t know how to do it with cabal. (And relatedly, at work I don’t know how to do it with my team’s nix+stack setup; in those situations I just drop myself out of nix temporarily.)
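For comparison, the Stack side of that swap is an extra-deps entry in stack.yaml; everything below (paths, repository, commit) is a made-up sketch:

```yaml
extra-deps:
# Use a local checkout instead of the released version...
- ../aeson-with-my-fix
# ...or a specific commit from a fork on GitHub.
- git: https://github.com/example/aeson
  commit: 0123456789abcdef0123456789abcdef01234567
```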
That is a good point.
That’s also easy with a cabal.project file, see:
All local packages are vendored, in the sense that if other packages (including external ones from Hackage) depend on a package with the name of a local package, the local package is preferentially used. For subdirectories to be considered local packages, the following setting can be used:
```
packages: ./*.cabal
optional-packages: ./*/*.cabal
```
…then any package can be vendored simply by making a checkout in the top-level project directory, as might be seen in this hypothetical directory layout:
```
foo.cabal
foo-helper/   # local package
unix/         # vendored external package
```
Starting with Cabal 2.4, there is now a stanza source-repository-package for specifying packages from an external version control.
```
packages: .

source-repository-package
  type: git
  location: https://github.com/hvr/HsYAML.git
  tag: e70cf0c171c9a586b62b3f75d72f1591e4e6aaa1

source-repository-package
  type: git
  location: https://github.com/well-typed/cborg
  tag: 3d274c14ca3077c3a081ba7ad57c5182da65c8c1
  subdir: cborg

source-repository-package
  type: git
  location: https://github.com/haskell/network.git
  tag: e76fdc753e660dfa615af6c8b6a2ad9ddf6afe70
  post-checkout-command: autoreconf -i
```