Hey all,
I wanted to capture some initial thoughts around pain points we have been observing over the past several months (years, perhaps) through our fairly extensive industrial use of Haskell at Well and Soostone. I’ll post some high-level descriptions here, but can certainly elaborate further if helpful. As a disclaimer, I feel strongly only about the first couple of points - I’m capturing the rest here for discussion’s sake, but readily admit I couldn’t claim they’re the most important priorities at the level of the ecosystem.
Finally, please note some of these comments come from an environment where we champion the use of Haskell in competition with more mainstream alternatives like JavaScript/NodeJS/TypeScript/etc and where practical results are what drive/justify the use of Haskell.
Faster compilation
To set a bold target, we need to reduce our compile times to something like 10% of what they are today. The amount of productivity lost to this one particular pain point is astronomical, to put it mildly. That said, any and all improvements would be most welcome, even if just a small percentage.
Perhaps a better way to say this is to make GHC compilation speed one of our primary, top-level objectives for Haskell adoption.
To illustrate the point further by offering a trade-off: I would much rather delay progress on advanced typing features for the sake of compiler speedups, or at least to ensure that they don’t cause further speed regressions. This is predicated, of course, on Haskell already providing an amazingly expressive programming model unmatched elsewhere.
If helpful, here’s another mental model: when GHC compile times slow down by 10% for the benefit of a new advanced feature, industrial users immediately feel a multi-million dollar hit to their productivity/happiness/QoL, in return for a modest offsetting gain in only those use cases that benefit from the new feature.
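For context, the user-side mitigations available today look something like the following (a sketch against cabal; `optimization: False`, `-j`, and the larger RTS allocation area are standard GHC/Cabal knobs, and the exact numbers are placeholders). Even with all of these applied, iteration on a large codebase remains painfully slow:

```
-- cabal.project.local: development-loop settings (sketch; tune per machine)
optimization: False       -- skip -O1/-O2 work entirely while iterating

package *
  -- compile modules in parallel and give GHC a bigger allocation area,
  -- both well-known compile-time wins
  ghc-options: -j +RTS -A64m -RTS
```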
Lower memory usage
This is less important than compilation speed, but still worth mentioning. We’re currently unable to work on our 500+ module project on a 16GB RAM laptop. The immediate solution is of course to get larger machines, which is what we’re having to do, but it isn’t doing us any favors in arguments against, say, JavaScript.
Add to this HLS, ghcid, a REPL, etc., and you really need a heavyweight machine to work effectively on a growing commercial codebase.
Mac support needs to get better
We’re still running into Mach-O linker errors on macOS in large projects and having to jump through hoops to avoid them. In some corporate setups (especially under regulation), developers are forced onto a single platform (e.g. Mac), so ensuring proper “industrial” compatibility is pretty important.
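For reference, the most commonly cited workaround (assuming the error in question is the Mach-O load-command size limit triggered by dynamically linking against many packages) is to ask ld64 to drop unused dylib references:

```
-- cabal.project: macOS linker workaround for the load-command size limit
package *
  ghc-options: -optl-Wl,-dead_strip_dylibs
```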
An answer for security scanning / static analysis needs
Big corps are increasingly mandating that their vendors use static analysis tools for cybersecurity and other compliance reasons. We should think about improving our answer here.
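Community tools do exist here (hlint for linting; stan and weeder, which work over .hie files), so part of the answer may simply be packaging them into a blessed, documented CI story. A sketch of what such a CI step might run, assuming the project is compiled with `-fwrite-ide-info`:

```sh
# static-analysis CI step (sketch; exact flags depend on the project)
hlint src/
stan        # reads the .hie files produced by -fwrite-ide-info
```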
A very clean, extensible Nix project setup/skeleton
I know this is a contentious point, as we have a multitude of preferences in our ecosystem, but I’m of the opinion that reducing time-to-prototype for a full-stack application setup would be a big win for increasing Haskell industrial adoption.
- Reduce the energy needed to spin up a new project to zero
- Have a clear path to transforming into a serious project
- Must be easily extensible to add non-Haskell dependencies: Python, databases, TensorFlow, whatever
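As a strawman for what that skeleton could contain, here is a minimal flake sketch (the `myapp` name, the system, and the extra dependencies are all placeholders):

```nix
{
  # flake.nix sketch: one Haskell package plus non-Haskell deps in one shell
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
      myapp = pkgs.haskellPackages.callCabal2nix "myapp" ./. { };
    in {
      packages.x86_64-linux.default = myapp;
      devShells.x86_64-linux.default = pkgs.mkShell {
        inputsFrom = [ myapp.env ];
        # non-Haskell dependencies slot in here
        buildInputs = [ pkgs.python3 pkgs.postgresql ];
      };
    };
}
```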
A clear, easy path to modern DevOps style deployments
Manual management of VMs with SSH access is a non-starter in serious corporate setups; it immediately fails compliance audits and a variety of required certifications. Having an easy answer for productionization via something like Terraform, containers, GitLab-runner-style CI/CD deployments, etc., would go a long way, so folks don’t have to reinvent the wheel themselves.
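To make that concrete: even a blessed recipe along these lines (a sketch using nixpkgs’ dockerTools, reusing the hypothetical `myapp` from the flake above) would save each team from rediscovering it, since the resulting OCI image drops straight into container-based CI/CD:

```nix
# container.nix sketch: a minimal OCI image for the app
{ pkgs ? import <nixpkgs> { } }:
let
  myapp = pkgs.haskellPackages.callCabal2nix "myapp" ./. { };
  # strip GHC and other build-time closure from the runtime image
  static = pkgs.haskell.lib.justStaticExecutables myapp;
in
pkgs.dockerTools.buildLayeredImage {
  name = "myapp";
  tag = "latest";
  config.Cmd = [ "${static}/bin/myapp" ];
}
```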