Deprecating Safe Haskell, or heavily investing in it?

Same here, I thought Safe Haskell was sort of “we trust the maintainer of this package” :sweat_smile:

1 Like

You make this claim and don’t back it up with anything other than “it can have bugs”.

My point is that sandboxes are particularly prone to hard-to-find bugs. More so than a compiler. Not only can the sandbox itself be buggy, but the whole structure of the setup tends to put bugs from the layer outside the sandbox onto the attack surface.

The point is to have different abstractions.

If you want to put something like Safe Haskell inside a sandbox, thus getting multiple layers of protection[^1], and if you have the resources to do that, then that’s an absolutely fine idea[^2].

But claiming that “sandboxes solve the problem, so Safe Haskell isn’t needed” isn’t that. It’s just abandoning one protection in favor of another. If you actively limit yourself to a single layer, how can you claim to be getting the benefit of more than one? And if you are limited to only one layer, either by choice or by circumstance, then, all other things being equal, you want to keep the more understandable one.

If you trust Haskell, the RTS and your binary, why even sandbox? Or use SafeHaskell?

Because I don’t axiomatically “trust my binary”. I’m trying to make my binary more trustworthy, and I’d like to have language features to do that. I’m actively running code that I do not consider trustworthy, or at least code in which I have relatively limited confidence. I want to limit what it can do, and I want to be able to observe what it does do.

If I pull in something from Hackage, even if I give the code a once-over, I still can’t be really confident that it’s not deliberately trying to cause trouble. To be honest, most projects don’t even do the “once over” step. They’re not even necessarily as wrong as you’d think to skip it, either, because trying to actually find intentional misbehavior with code review is a fraught project, even if you put in a lot of review time. And nonetheless you’re basically forced to pull in a lot of code.

No, the point is we don’t trust it. And language-level features to gain that trust are at the utterly wrong level, unless we start to be serious and have a verified compiler, etc.

You also don’t have a verified sandbox, a verified kernel, a verified hypervisor, verified SMM code, the rest of a verified BIOS, verified AMT/PSP/whatever, verified expansion card firmware, or a verified CPU. Most of those are in your TCB no matter what you do. Why do you feel you can rely on them if you can’t rely on an unverified compiler?

I’m not sure where you got this from. If we’re talking about the user-space API, kernel developers consider it a bug when the user-space API breaks, and they lose sleep over it.

I think there’s a deep misunderstanding here. I’m not sure exactly where.

It’s not that kernel developers willingly break existing user-mode code. It’s that they’re always extending what the API can do. If your sandbox has decided to allow ioctl on sockets of type foobar, and the kernel developers add a dangerous ioctl, they’re not breaking the API from their point of view… but they are breaking your sandbox.

… and the whole interface with the kernel is fractally complicated, with danger able to pop up anywhere. For example, did you know that in Linux, you can pass one of your open FDs to another process over a UNIX domain socket using sendmsg? That feature is decades old and inherited from BSD. It’s a good feature; you could use it to build a capability system on top of Linux. But you can also use it to break a sandbox.
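
To make the example concrete, here is a minimal sketch in Haskell, assuming the network package’s sendFd/recvFd wrappers (which call sendmsg/recvmsg with SCM_RIGHTS under the hood). Socket setup is omitted; both ends are assumed to be connected AF_UNIX sockets.

```haskell
import Foreign.C.Types (CInt)
import Network.Socket  (Socket, recvFd, sendFd)

-- Sender: hand one of our open descriptors to the peer process.
giveFd :: Socket -> CInt -> IO ()
giveFd = sendFd

-- Receiver: the kernel installs a brand-new descriptor in our table,
-- referring to the sender's open file. A sandbox policy that only
-- vets open()/openat() never sees this happen.
takeFd :: Socket -> IO CInt
takeFd = recvFd
```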

If you did know about that example, you’re unusual; most people don’t. I have definitely known many people who have built sandboxes who haven’t known about it. And it’s just one example among an unknown but large number.

You move on to suggest VMs instead. I guess those are bug-free and less complicated?

No, they are not bug-free (as I said). They are also complicated (as I said). However, yes, they are in fact a lot simpler than the Linux kernel.

The TCB is only one concern; the other question is how much of the TCB’s surface is exposed to attack. I accept that the TCB containing a VM is larger than the TCB containing a program under bubblewrap, because your average hypervisor is larger than bubblewrap, and probably larger than the kernel code that actually implements most of what bubblewrap does. But, by the most complete definition, the TCB for both of them includes all of the host kernel, plus all the ancillary stuff I mentioned above. That stuff is a lot bigger than either of them, so the ratio of the TCB sizes isn’t really enormous. No matter what, you have a giant TCB. But the VM’s attack surface is meaningfully smaller and simpler.

[^1] Layers are good, but they do have a pitfall in real projects. It’s easy for both programmers and project managers to fall into thinking that, because there are two or more layers, they don’t have to ensure that any of them actually works. Nobody explicitly says that, or even explicitly thinks it, but there’s always a tendency to lose your healthy fear of cutting corners, because “the other layer will take up the slack”. That’s exacerbated because your resources are spread over the layers. You can end up with every layer being so full of holes that it’s not hard to escape the whole thing. It takes discipline and good project governance to avoid that, and those can be in short supply.

[^2] I’m kind of allergic to calling it “different abstractions” rather than “layers of protection”. One of the reasons sandboxes tend to be bug-prone is that the set of operations they have to pass judgement on isn’t usually designed around any particular abstractions at all, which means that the number of regular, understandable things you can rely on when deciding what to enforce is relatively limited.

2 Likes

Without the sarcasm:

Condensed version:

  1. Since it’s already in GHC, let’s first try to remediate Safe Haskell.

  2. Failing that, try recycling it to make a replacement that everyone finds mutually-tolerable.

  3. If neither is possible, then remove Safe Haskell.

They’re listed in the associated paper.

Explaining why I mentioned them is probably best left for another (long!) thread…

1 Like

Telling other people what to do does not really work in open source. I have yet to see a proponent of Safe Haskell who is ready to get their hands dirty. It’s all talk: “do not remove, we should fix it”, where “we” is always someone else.

Yes, it would be nice to have Safe Haskell for some definition of “safe”. Except that no one has figured out exactly which. We know multiple reasons why Safe Haskell failed, but we don’t know what can be remediated to make it useful.

If anyone is interested in working on fixing / improving / remediating Safe Haskell and has an idea of what needs to be done, please step forward. If no one is, let’s stop paying the ongoing costs of this misfeature.

5 Likes

…it isn’t a decree, only a suggestion - mine have been ignored in the past, and no doubt will be ignored in the future. And it seems (at least from the majority of comments here) that:

  • Safe Haskell is more hindering than helpful,

  • there’s no interest in finding another way to check whether unsafe features or extensions are being used in a Haskell module (other than e.g. using grep),

  • and there won’t be any adverse consequences in the long term as a result of removing Safe Haskell.

That is all rather optimistic, at least to me. But maybe I’m just too old and cynical…

Any evidence for this? I don’t really agree. There’ve been lots of very subtle but dangerous bugs in GHC, such as broken maths on M1 in recent releases.

I don’t think there’s any use in making claims about the number of bugs beyond considering the TCB. The rest is subjective, unless we have data.

Not at all. Because SafeHaskell doesn’t give any protection whatsoever. I’ve pointed out multiple times in this thread what the research standard on secure programming is (as in: language-level security). SafeHaskell is nowhere near it.

Security layers don’t work this way: if you can’t trust code generation, you can’t trust the type system either.

If you don’t trust your kernel, you can’t trust anything else running on your OS.

This can be solved by a sufficiently sophisticated effects system that enforces no system calls can happen outside of defined effects.
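
As a minimal sketch of the idea (module and effect names are illustrative): hide the constructor of a wrapper monad, so the only effects untrusted code can perform are the ones the host chooses to export.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

module Restricted (Restricted, logLine, readConfig, runRestricted) where

-- The constructor is not exported, so client code cannot wrap
-- arbitrary IO actions; it can only combine the effects below.
newtype Restricted a = Restricted (IO a)
  deriving (Functor, Applicative, Monad)

-- The complete set of permitted effects:
logLine :: String -> Restricted ()
logLine = Restricted . putStrLn

readConfig :: Restricted String
readConfig = Restricted (readFile "config.txt")

-- The trusted host decides when (and whether) to run untrusted code.
runRestricted :: Restricted a -> IO a
runRestricted (Restricted act) = act
```

Of course, this only constrains the client if escape hatches like unsafePerformIO are unavailable to it.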

However, this is not a proper security layer either with an untrustworthy compiler.

Excellent point. You can have levels of security without verification.

But my stance is that if I do security inside the same system (especially language), then there’s no way to make this interesting other than going the formal verification route.

E.g. you don’t trust bubblewrap. I could say, well… let me rewrite it in Haskell utilizing SafeHaskell. Would that catch your attention? Probably not! And rightly so.

You want to know the binary behaves correctly, not that someone used fancy language features.

And bubblewrap, although not verified, works at exactly that layer: the outermost external layer, making no assumptions, having no knowledge of what Haskell is.

Your anecdotes about sandbox bugs are interesting and I certainly did not know about them.


I could put it another way: there’s a reason some companies don’t let programmers write their own test cases. Subconscious bias, assumptions, etc. The only way to solve this is either to have proof that the tests are good and exhaustive, or to separate these concerns strictly.

1 Like

I’ve used SafeHaskell for one of my libraries. My hope was that it would provide some guarantee to consumers of the library that there was no use of IO or unsafePerformIO, etc. I don’t think Hackage really advertises that fact well, so it was hard for SafeHaskell to take off. I would like something better, with well-defined guarantees, but so much of the ecosystem is unsafe or uses Template Haskell that it might not be worthwhile, because SafeHaskell stops compilation with an error. It also required buy-in from all of the authors of code on Hackage to make it work. What if the Template Haskell-using libraries actually generated code that would be considered safe? What if the rewrite rules were safe for the types they were applied to? It seems like SafeHaskell was too heavy-handed and too opt-in to work.
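
For illustration, this is the kind of guarantee I was hoping for (a sketch; the module and function names are made up):

```haskell
{-# LANGUAGE Safe #-}

-- With the Safe pragma, GHC rejects this module at compile time if it
-- (or anything it imports) uses unsafePerformIO, unchecked FFI, or
-- the other escape hatches.
module Data.TinyHash (tinyHash) where

import Data.Char (ord)

-- A deliberately boring pure function: consumers can see that nothing
-- here could have been smuggled in via IO.
tinyHash :: String -> Int
tinyHash = foldr (\c acc -> ord c + 31 * acc) 0
```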

What I would really like is a report for each package on Hackage, generated automatically on upload, describing the “dangerous features” used by the package and its dependencies. Perhaps with a distinction between Template Haskell that generates “SafeHaskell” code and Template Haskell that generates a lot of “unsafe…” calls.

I think that having a report of things considered unsafe, linked to in Hackage, would be better than SafeHaskell, because it would be more discoverable and less opt-in than SafeHaskell.

3 Likes

I love that we’re having this discussion at all. It feels incredibly healthy for the community to reflect on what has or has not worked and find paths forward.

Safe Haskell seems like it has always been about getting more guarantees out of code. In particular, I would find it quite valuable to be able to effectively isolate usages of partial functions, calls to error/unsafePerformIO and such, including which dependencies make use of those. It is worth noting that Safe Haskell does not meet that goal today.

In addition to the already discussed weight on the GHC team, Safe Haskell is often confusing to users, which I think is clearly shown by several of the comments here so far.

So it would seem that Safe Haskell is a net negative for the community as a whole, and ending that pain would be a great thing.

I think we should follow the three-release policy of deprecation and then removal, even though that requires changes to code. Because if we do not, have we really taken the pain away? Everyone will still have the code to deal with, users will still see it in the extensions list, etc. As part of this, I think a great idea would be to write some tooling to automate the removal of the relevant code. I’m not sure, but perhaps we could leverage retrie to smooth that transition.

You misunderstand: I’m proposing that the Safe Haskell language pragmas become no-ops as the form of deprecation, rather than making old code fail to compile.

1 Like

I have always wanted Haskell, and GHC in particular, to serve as a laboratory for trying out ideas, and seeing which ones flourish and which do not. Ideas always start with particular, passionate individuals, and Safe Haskell is no exception: the driving forces were David Terei and David Mazieres at Stanford.

Successful ideas outlive their creators; they attract others who maintain, refine, and develop them. GADTs is an example of a wildly successful idea; GADTs are now deeply embedded in GHC. Template Haskell is another – Tim Sheard was the original inspiration, but TH is no longer dependent on Tim, while lots and lots of libraries are dependent on TH.

Linear Haskell is at an earlier stage. It still has its original enthusiasts deeply engaged (Arnaud, Krzysztof, and others), and we have yet to see how broadly it is used. But I love that GHC can be a petri dish for an experiment like this.

Safe Haskell is arguably an idea that hasn’t taken root. It remains a pretty good idea. As the paper says “Haskell comes tantalisingly close to providing a means for confining malicious code”. But somehow Safe Haskell hasn’t caught fire, gained users, grown enthusiasts who love and develop it. So, without any implied criticism of David and David (and I was a co-author on the original paper too) I think it would be plausible to lay it to rest.

Incidentally, while thinking about the petri dish, Backpack is another experiment whose outcome is (to me) uncertain. Backpack is a “big feature”; it fills a really important hole in Haskell’s story, namely large-scale modularity. And yet it hasn’t attracted either the enthusiasts or the real-world usage that I was hoping for. I would love for someone, or a group, to take up the mantle of being the Backpack enthusiasts and maintainers.

16 Likes

Thanks for bringing some clarity into this. Just a side comment…

I think we need to approach things much more from the end user POV.

Backpack is great, but discussion of it has always been primarily technical, aimed at advanced users. Many people don’t even know what it’s really for.

We need to market and demonstrate use cases in a digestible way for regular users, so they will go “oh, that’s neat, I want it”. This will also allow us to reason about proposed features in terms of usability and how they fit into workflows, which I think has largely been neglected.

2 Likes

At my work, we have strongly wanted Backpack on more than one occasion. Our resistance has always come down to ease-of-use and HLS integration.

The same ease-of-use worries haunt us with linear types, which we have actually tried to use, but the cost of use was slightly too high (known bugs at the time, such as linearity inference through let bindings and case statements).

I’m not sure if these concerns are totally real, but I’ve seen the bug reports on HLS, and it’s enough to make me feel uncomfortable making my company rely on Backpack.

I figure many people who have ever used OCaml modules and functors would like to have such functionality in Haskell. I do, for one. I actually played around with Backpack a while ago for a very suitable use case, but ran into too many showstoppers (one of them being Stack not supporting it) to pursue it further. See xinU, a Haskell Experiment » nicolast.be

2 Likes

Incidentally, while thinking about the petri dish, Backpack is another experiment whose outcome is (to me) uncertain. Backpack is a “big feature”; it fills a really important hole in Haskell’s story, namely large-scale modularity. And yet it hasn’t attracted either the enthusiasts or the real-world usage that I was hoping for. I would love for someone, or a group, to take up the mantle of being the Backpack enthusiasts and maintainers.

Backpack is very important for incremental compilation, the best version of splitting base, and making Hackage type safe — by which I mean version solving is sound and complete and we don’t need to spend O(2^n) compute resources verifying this. So I remain confident its time will come!

Let’s discuss Backpack in a separate thread please.

6 Likes

In my opinion, almost everyone does that, or should at least think of it that way, since it’s wrong to see trust as a binary of either full trust or no trust whatsoever. Every time I use a library from Hackage, I’m running somewhat untrusted code. I’m fairly confident that it’s not malicious, but I have far from 100% confidence in that, at least not without a thorough inspection of the code, and perhaps not even then.

So for me, the most value Safe Haskell could give is to decrease the level of trust needed when using third-party libraries, at least in cases where they don’t need IO access, thus decreasing the attack surface for supply chain attacks.

1 Like

One of the big practical challenges, in my experience, is confining Trustworthy code to small modules that can be audited easily.

1 Like

…and now that it’s been ~~coughed~~ ~~regurgitated~~ brought up yet again in the context of language editions:


Deprecating Safe Haskell

So what was the intended purpose of Safe Haskell?

So if it really is time for Safe Haskell and its ecosystem to be dismantled, then such “loophole” language features (old and new), along with their associated ecosystems, should also be included in that dismantling, to avoid regressing (in this case) all the way back to Haskell 2010:


Renovating Safe Haskell

Haskell is (meant to be) a declarative language. So a module written in ordinary, boring, un-extended Haskell ought to be safe by default: there ought to be no need at all to explicitly label that module as being Safe. Instead, any module which uses unsafe language features should be labelled “unsafe” in some way.

Since unsafe has already been reserved for a technical purpose in the FFI, and based on this post, a future Safe Haskell could use feign as an annotation for both unsafe modules:

module feign MotleyBits (...) where

and their imports:

import feign MotleyBits (...)

Note:

  • there should be no way to “override” the absence of either annotation if Safe Haskell is enabled - if users don’t want to see the resulting warnings (or errors if need be) they must add the annotations.

  • there should be no transitivity - if module A imports from module B, and B imports from an unsafe module C, then only B requires the feign-annotated imports.

  • If there were some way to “deem” the likes of runIO and the various PerformIOs as being foreign to Haskell, reusing the existing reserved word foreign could be an alternative to introducing feign.

So instead of everyone else having to label their modules as “safe”, the few who use unsafe Haskell features in the sources of their modules would need to label those modules or their imports accordingly. This places the focus on unsafe modules and on how to have fewer (or none) of them.


1 Like

You have to define what you mean by “safe” and this definition must be meaningfully useful.

In my mind, compiling a regex with libpcre2 and having it return a ForeignPtr Pcre2_code is a safe, pure operation, provided I write the code correctly and don’t leave any escape hatches that make the whole thing crash and burn. The compiler will never be able to approve that, because it can’t read the documentation for that library.
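
To make that concrete, here is a hedged sketch of the pattern; the foreign import is a simplified, hypothetical stand-in rather than the real pcre2 API:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.String   (CString, withCString)
import Foreign.ForeignPtr (ForeignPtr, newForeignPtr_)
import Foreign.Ptr        (Ptr)
import System.IO.Unsafe   (unsafePerformIO)

data Pcre2Code  -- opaque handle to a compiled pattern

-- Hypothetical C shim; not the actual pcre2_compile signature.
foreign import ccall unsafe "my_pcre2_compile"
  c_compile :: CString -> IO (Ptr Pcre2Code)

-- Pure by fiat: correct only if a human upholds, by hand, invariants
-- the compiler cannot see. That judgement call is exactly the point.
compileRegex :: String -> ForeignPtr Pcre2Code
compileRegex pat = unsafePerformIO $
  withCString pat (\c -> c_compile c >>= newForeignPtr_)
{-# NOINLINE compileRegex #-}
```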

If your definition of safe is a hard “everything the compiler can approve and not an inch more”, then you will have to solve this topic’s #14, either banning recursion or implementing very complex algorithms that make definitions like let x = x in x impossible.

I would argue the question is the inverse: how much of the ecosystem actively relies on Safe Haskell? Based on the two replies to this comment, the answer is very close to none. Moreover, the only real use case seems to be trust, which is something this extension can never solve.

The intention is to reuse Safe Haskell’s existing notions of safety (which is one reason the section that quote comes from is titled Renovating Safe Haskell).


Apparently enough to contemplate its replacement - if merely removed, the more pertinent question is:


…and no matter how many extensions are added to GHC’s type system, there will always be some type it cannot infer or check. But I don’t see angry mobs of Haskellers clamouring for the type system to be dismantled just because type extensions can never solve that problem.

Typed programming is a “best effort” solution, and so would trusted programming. But for those who like “zero effort” solutions:

# mkdir ./Main
# cat /dev/urandom > ./Main/Main  # ...Ctrl-C!
# chmod u+x ./Main/Main
# ./Main/Main

(after backing up and isolating your system from the Internet ;-)