Convenience in the Haskell ecosystem

Yes, that’s not really the question.

The main question is where. Once we’ve figured out what kind of overall user experience we want, we have to figure out which abstraction is the right layer for that and what APIs the various individual tools need to provide in order to allow for that.

This is where stack took the shortcut (imo) and tried to be the layer for everything: tool installation, fixed package sets, building Haskell code, nix integration, docker integration, …

It is not an easy problem.

3 Likes

I’m not too familiar with this tool; how is it different to, say, Stack templates?

One of the main differences, I suppose, is that Stack templates create … Stack projects :smiley: whereas I think these days it would be extremely convenient to have a templating setup that would create projects using different build systems/installers; i.e. ghcup, cabal, stack, various nix-y options, …

I definitely agree with @puppo that if we have something as prolific as create-react-app that would allow the community to define and maintain arbitrary Haskell-based projects to start from, that could be extremely powerful and convenient.

Honestly, in my view it’s probably an interesting and valuable-enough project that it’s worth a proposal to the Haskell foundation.

2 Likes

Agreed. I think it’s the best and easiest idea we’ve converged on in this conversation.

(In fact, I already started writing one earlier today, though I’ve realised I’m unsure what design would be best.)

3 Likes

The format used for Stack templates is based on the syntax of the Mustache tool and is, in fact, Stack-agnostic.

At the moment, stack new my-project my-template effectively applies the template and then runs stack init. If stack new had a flag --no-init, which would be very simple to add, it could be used to create entirely Stack-free projects from templates.
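For reference, a Stack template is a single Mustache-processed .hsfiles file, with {-# START_FILE … #-} markers delimiting the files to generate; a minimal sketch (contents illustrative):

```
{-# START_FILE {{name}}.cabal #-}
cabal-version: 2.4
name: {{name}}
version: 0.1.0.0

executable {{name}}
  main-is: Main.hs
  build-depends: base

{-# START_FILE Main.hs #-}
main :: IO ()
main = putStrLn "Hello from {{name}}!"
```

Nothing in the format itself is Stack-specific: any tool that can substitute Mustache variables and split on the START_FILE markers can consume it.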

4 Likes

I’m aware. It’s even been split out as project-template, which is very helpful!

2 Likes

In my early involvement with Haskell, somebody explained to me that the Haskell Tool Stack (Stack) did not follow the ‘Unix philosophy’, which I understood then to mean ‘a tool should do only one thing, and do it well’. Stack is indeed like a Swiss Army knife. The challenge is to somehow make sure that introductory users can find the main blade without needing to know (or be confused by) the fact that somewhere, tucked away in the handle, there is also a tool that will remove stones from a horse’s shoe.

8 Likes

Exactly.

But that doesn’t mean that you can’t have high-level tools that follow the unix philosophy. Those tools are glue abstractions over existing tools: I consider the VSCode Haskell extension to adhere fairly well to the unix philosophy. It isn’t a lot of code and its objective is to make things “just work”. It uses HLS and GHCup and some minor custom logic to achieve that.

This is at the editor level… the editor is the closest thing we have to the “end user experience”, and we can make some rough assumptions about what end user experience is expected (e.g. vim vs emacs vs VSCode).

The other angle is: sane defaults.

Those are the difficult decisions: which abstraction layer should provide what… and what are sane defaults.

We have to decouple the tools from the user interface… this allows us to build different user interfaces based on the same set of core tools. And this is only possible with the unix philosophy. Otherwise we end up re-implementing and rewriting, although we just wanted a slightly different UX.

2 Likes

summoner exists but I believe it has gone unmaintained. I understand people thought it was pretty handy, and it could be a good base to start from, or at least inspiration.

3 Likes

We had haskeleton too, which I understand has been merged into stack, and I’m pretty sure there have been a few others. All of them became unmaintained or forgotten for some reason. It might be interesting to know those reasons before creating a new one. My guesses are:

  • discoverability: you can’t use a tool if you are not aware of its existence (that is the limit of the unix philosophy)
  • difficulty to find the correct template
  • the correct template rarely exists (everybody has different needs)
  • it’s not worth the effort: it takes longer to find the right template than to roll it yourself.

An easy alternative would be to create a cabal file with the common packages commented, so the user just has to uncomment the desired packages.
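For example (package list illustrative), the generated build-depends section could look like:

```
executable my-app
  main-is: Main.hs
  build-depends:
      base
    -- Uncomment the packages you want:
    -- , containers
    -- , text
    -- , aeson
```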

Also, to guarantee that the cabal file matches the versions of GHC and HLS, maybe that should be a job for GHCup (rather than cabal): GHCup generates a cabal file for the version you selected.

2 Likes

GHCup is an installer, not a cabal file templating mechanism.

This is exactly why we need unix philosophy: tool boundaries shouldn’t be violated, just because an existing tool is popular.

A unix compatible solution to this is already proposed:

4 Likes

Just curious, how much would Cabal scripts / headers have helped in your use case? I’ve never used HLS with cabal headers, so I’m not sure if HLS would have been useful for your case.
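(For anyone unfamiliar: a cabal script is a single .hs file whose build configuration lives in a header comment, runnable with cabal run; a minimal example:)

```haskell
#!/usr/bin/env cabal
{- cabal:
build-depends: base
-}

-- The whole project configuration lives in the comment block above;
-- `cabal run script.hs` builds and runs it without any .cabal file.
greeting :: String
greeting = "hello from a cabal script"

main :: IO ()
main = putStrLn greeting
```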

Indeed, this requires writing an exact-printing parser, which is difficult. As it happens, I believe @Liamzy is currently working on one for Cabal, but I’m not sure of its current status.

Stalled, I sort of wish I never got myself caught up in NeoHaskell drama.


Also, there is yet another solution to the present problems.

One thing I’ve suggested a thousand times is just to have wrapper libraries around standard Haskell libraries; i.e, quite a few libraries aren’t newbie friendly, but it doesn’t require writing new code.

All you need to do is to wrap existing libraries, have somewhat sensible defaults, and there you go.

For your problem as well, it seems as though the reasonable way to deal with having to figure out what libraries you need would simply be a bunch of wrapper libraries on Hackage; i.e, you have a task, you want the task to be simple, the wrapper libraries reexport the dependencies you want and come with suggested workflows.

If you don’t like the wrapper libraries or don’t like their defaults, just don’t use them and import their underlying dependencies instead. The wrapper libraries are just there for convenience.
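A tiny sketch of the idea (the module and helper names are hypothetical, not an existing Hackage package): wrap containers’ Data.Map.Strict behind beginner-friendly names and defaults, and a real wrapper would simply re-export the rest.

```haskell
-- Hypothetical wrapper: friendlier names and defaults over
-- containers' Data.Map.Strict; no new functionality is implemented.
module Main where   -- would be e.g. Friendly.Map in a real library

import qualified Data.Map.Strict as M

-- A beginner-facing alias for fromList.
fromPairs :: Ord k => [(k, v)] -> M.Map k v
fromPairs = M.fromList

-- Lookup with an explicit fallback value instead of a Maybe.
lookupOr :: Ord k => v -> k -> M.Map k v -> v
lookupOr = M.findWithDefault

main :: IO ()
main = print (lookupOr 0 "c" (fromPairs [("a", 1 :: Int), ("b", 2)]))
```

If the defaults don’t suit you, you import Data.Map.Strict directly and lose nothing.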


This strategy easily gets around high maintenance requirements, if that’s what killed Haskell Platform. Instead of aiming to be all-in-one convenience, it’s some-in-one; i.e, the wrappers would be targeted to only specific use cases. If one wrapper dies, it’s not the same as the entirety of Haskell Platform dying.

if you give it a try, you’ll find vscode + haskell works wonderfully well.

Not always (try it in many kinds of project, eg multi-package ones; or on a resource-constrained machine) and not all the time (try using it heavily over an extended period). I would temper that claim a little because it is highly frustrating for users who trust that it’s true and then have a different experience [again].

7 Likes

sorry for the stack team

Why, maxigit? stack has been stable from first release to the present, IME.

3 Likes

My mistake. I didn’t mean that stack wasn’t stable, but I am not sure of its long-term future.

If you look at the contributors graph, you can see that Michael Snoyman worked on it for about 5 years. OK, @mpilgrem has taken over, and it might be for 5 years too. Then what?

Also, there is a possibility that in 5 years’ time Cabal or a new competitor will make Stack obsolete.
Don’t get me wrong: I’m a Stack user and I think the Stack team is doing a fantastic job!

1 Like

I think this is an important point. The only reason people use Stack templates is that they’re right there as soon as you look at Stack.

These are also important, which is why I suggested a slightly different approach above: to have a small number of highly curated and largely unopinionated templates, so that it’s immediately obvious which one to use. (‘Hmm, I’m doing a quick-and-dirty experiment, so let me select batteries-included-script’… that sort of thing.)

I disagree with this — it’s barely different from the current situation. It might help beginners with discoverability of packages, but little else.

I agree with this.

In response to @hasufell’s criticism of this point: I see it as a GHCup issue precisely because GHCup is an installer, and responsible for provisioning HLS. That being said, GHCup does provide an API, so it doesn’t necessarily need to be the responsibility of ghcup-the-tool… a templating program could easily call into the API instead.


On further reflection… I think the templates can be split up into several more or less orthogonal directions:

  • Basic info: name, date, license, etc.
  • Form: single Cabal script vs entire Cabal project
    • If the latter, its components: library and/or executable and/or test suite
  • Available packages: either base alone, or a selection of boot libraries, or a broader selection of popular libraries
  • The code: either as minimal as possible, or with some common imports, or with a sample program

Conceptually, this seems reasonably easy to integrate into the interactive cabal init prompt sequence. It also doesn’t seem too difficult to implement, since it doesn’t require parsing and extracting any template files — it should mostly be string concatenation. What does everyone else think of this idea?
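A rough sketch of the string-concatenation approach (all names hypothetical): the orthogonal choices become a plain record, and rendering is just unlines — no template files to parse.

```haskell
-- Hypothetical sketch: the orthogonal template axes as plain data,
-- rendered into a cabal script header or a .cabal file by string
-- concatenation alone.
module Main where

import Data.List (intercalate)

data Form = Script | Project deriving (Eq, Show)

data Choices = Choices
  { projName :: String
  , form     :: Form
  , packages :: [String]   -- base alone, boot libs, or a wider set
  }

render :: Choices -> String
render c = unlines $ case form c of
  Script ->
    [ "{- cabal:"
    , "build-depends: " ++ deps
    , "-}"
    ]
  Project ->
    [ "cabal-version: 3.0"
    , "name: " ++ projName c
    , "version: 0.1.0.0"
    , ""
    , "executable " ++ projName c
    , "  main-is: Main.hs"
    , "  build-depends: " ++ deps
    ]
  where
    deps = intercalate ", " (packages c)

main :: IO ()
main = putStr (render (Choices "my-app" Project ["base", "containers"]))
```

The interactive cabal init prompts would just fill in the Choices record.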

1 Like

If we’re brainstorming around this problem area, I have a question: why does Haskell have packages?

‘Unit of code distribution and versioning,’ I know, sure. But Stackage package sets get pretty close to users not having to care about the package layer as far as versioning and distribution are concerned. There’s still a step where the user manually affirms that they want their project to depend on a package named whatever, but then the problem of figuring out what version of that package and how to get it is happily abstracted away. And the problem of figuring out what packages are needed based on modules imported is also automatable. And rather than have all of this tooling to paper over the existence of the package boundary, what if Haskell just didn’t have it?

As a user, at the level of the module, all I want to specify is which other modules this module depends on. At the level of the project, what I really want to specify is which entities I trust (in terms of both security and quality) to provide me with the libraries I use, and a choice of a vetted package set. What if library vendors published their packages to a Stackage-like service that kept package name and version as metadata attached to modules (in order to look up documentation or bug reports or what have you), but did not retain any concept of ‘package X depends on package Y’, and served modules back individually on demand in association with an entire package set instead of a particular package and version? And what if the build tool used that service to fetch any module imported on demand as long as that version was authored or endorsed by a trusted entity (such as when said entity pushed that module as part of publishing their package)?

Then the default Cabal template would just be, use latest stable package set, trust the Haskell Foundation, and nobody would need to care about whether they remembered to include containers. You can write a Yesod application by saying that you additionally trust Yesod Web, and that gets you whatever subset of yesod and shakespeare and persistent that you actually use. It’s the batteries-included solution but lazy about actually shipping the batteries—what’s more Haskell than that?

3 Likes

This is a pretty big thing though :grin:

1 Like

I think it was a bigger thing before package sets, is what I’m saying. (Probably still a big thing for anyone who has some sort of unconventional or throwback library sourcing model, I suppose, which is why I don’t really expect this idea to go anywhere.)

1 Like

I am not sure I understand what you are suggesting. Package sets are still made from packages, right? so you do need packages to start with.

1 Like

This should be fairly easy: ghcup list supports a machine-parsable format and has a lot of options.

The VSCode haskell extension uses that too: https://github.com/haskell/vscode-haskell/blob/f22634252b4918be044baf027247548da23d92c5/src/hlsBinaries.ts#L629


However. The larger problem is the compatibility between GHC and HLS. There are two things that GHCup does for the end user here:

  • it ensures
    • that the recommended GHC works with the recommended HLS
    • that the latest GHC works with the latest HLS
    • that the recommended GHC works with the latest HLS
  • informs you about incompatible GHC+HLS set versions
    • prints a warning to stdout when you trigger such a combination via ghcup set
    • shows hls-powered for the currently set HLS in ghcup list and ghcup tui

As such, you can spawn a “virtual env” with a working set via ghcup run --ghc recommended --hls latest --install -- sh.

Once you have that, all you need is the base version (which you can inquire via ghcup list too). And then you lock that base version in your cabal file.
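That is, after checking which base version ships with the selected GHC, the cabal file would pin it (the version number below is illustrative):

```
library
  build-depends: base ==4.18.2.1
```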


What would be interesting is a better interface for saying “install the latest HLS that’s compatible with GHC X.Y.Z”. This is in fact implemented in the VSCode Haskell extension! It utilizes the following metadata for that: https://github.com/haskell/ghcup-metadata/blob/develop/hls-metadata-0.0.1.json. GHCup itself doesn’t currently use that file, for two reasons:

  • it’s an additional download (GHCup has been designed to be both very quick and up to date, so download times and even yaml parsing times are relevant)
  • I haven’t figured out a good generic design yet… it seems this might be a feature that should live in the TUI, with some interactive mechanisms, where an additional download in the background isn’t a problem anyway

Some related issues:


The third option that has been mentioned is to ship HLS with GHC, but that has large implications, including for the GHC devs and their resources. I can see some benefits, but I’m not sold yet.

4 Likes