Deprecating Safe Haskell, or heavily investing in it?

See also the thread: [Haskell-cafe] Safe Haskell?

I personally think Safe Haskell could be the thing that sets Haskell apart from other languages. But I do think it doesn’t live up to that as it is now.

This confuses me: it seems to me that 2 requires 1 (otherwise I can just unsafeCoerce unrestricted IO), and Safe Haskell doesn’t directly provide 2, AFAIK.


Yes indeed, Safe Haskell provides the foundations to do 2, but we can do it in another way (like with nsjail).

See 6.18. Safe Haskell — Glasgow Haskell Compiler 9.4.4 User's Guide on Restricted IO Monads

Yes indeed, Safe Haskell provides the foundations to do 2, but we can do it in another way (like with nsjail).

So essentially outside Haskell? If so, I think you could reword the proposal a bit to make that clearer.

It’s not a proposal, it’s a discussion. :slight_smile:
The proposal will appear on the GHC Proposals repository.

The idea that we can’t ever use safe coercions (coerce, DerivingVia, GeneralizedNewtypeDeriving) in Safe code because there could be some old Trustworthy-marked (or otherwise sensitive) module somewhere that fails to include a necessary role annotation is simply obscene.
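To make the hazard concrete, here is a self-contained sketch (the `Sorted` type and the `Down` trick are illustrative, not from any real library): with the default representational role, `coerce` lets a client re-tag the element type of an abstract container and silently break its ordering invariant, which is exactly why a sensitive module needs a role annotation.

```haskell
import Data.Coerce (coerce)
import Data.List (sort)
import Data.Ord (Down (..))

-- An "abstract" type whose invariant is: the list is ascending for the
-- element type's Ord. Without a `type role Sorted nominal` annotation the
-- parameter defaults to representational, so clients may coerce it.
newtype Sorted a = Sorted [a]

fromList :: Ord a => [a] -> Sorted a
fromList = Sorted . sort

-- Relies on the invariant: the head is the minimum.
smallest :: Sorted a -> a
smallest (Sorted xs) = head xs

main :: IO ()
main = do
  let good = fromList [3, 1, 2] :: Sorted Int
      -- Re-tags the elements without re-sorting. Under Down's Ord the
      -- list ought to be descending, so the invariant is now broken:
      bad = coerce good :: Sorted (Down Int)
  print (smallest good)           -- 1, the true minimum
  print (getDown (smallest bad))  -- 1, but Down 1 is the *largest* element
```

Adding `type role Sorted nominal` inside the defining module makes the `coerce` call a type error; the problem Safe Haskell worries about is all the existing modules that never added it.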


…but will they still be as joyful when multithreaded GHC is the default, certain unsafe features/extensions are behaving badly as a result, and they’re hunting for those new bugs?

A more-measured process seems the better option here:

  1. Implement all the approved language and implementation proposals which can interact badly with unsafe features or extensions;

  2. Using the experience gained in step 1 and elsewhere, attempt to improve Safe Haskell;

  3. If Safe Haskell cannot be salvaged, use the experience from both steps 1 and 2 to devise a satisfactory replacement for the vast majority of Haskell users.

  4. If that replacement cannot be found (e.g. within a reasonable time period), only then should Safe Haskell be deprecated, leaving us with the joys of hunting for obscure bugs, dealing with fragile tests, etc. - fun and games for everyone!

If only someone could figure out a way to reduce (or even vanquish!) the need for all that “unsafeness” - it would reduce (or vanquish) the need for such a process to begin with.


What does the threaded run-time have to do with Safe Haskell? There’s Trustworthy code all over the ecosystem, including base, that causes every bit as much trouble as “unsafe” application code.

Fully in favor of deprecation of Safe Haskell. The fact is that the Safe ecosystem failed to materialise. Instead there are lies, big lies, and Trustworthy slapped over every piece of unsafe abomination you can imagine.


The question is why the Safe ecosystem failed to materialize, though. Is it because of deep and fundamental design flaws, or is it because of a few key fixable hurdles, e.g. poor communication about how it was meant to be used? The package guidelines don’t say a single word about Safe Haskell. It’s hard for me to see how people not using a feature they’ve never been asked to use is a condemnation of the feature. Safe Haskell doesn’t buy you anything unless you have a system that runs untrusted code, and most people don’t. So we learn next to nothing from the obvious result: volunteers who have no personal need for safety metadata, and have never been asked for it, don’t maintain it.

I must ask because I still don’t understand: why is coerce unsafe? Since it requires the constructor to be in scope, does it actually let you do anything that you couldn’t do without it? This is essential, IMO, to the discussion of whether Safe Haskell can be redeemed. It’s a huge obstacle right now that so many forms of deriving cannot be used in a Safe module, and personally it is the only reason why so many of my own packages ignore Safe Haskell.


Safe Haskell does not allow you to run untrusted code: partial functions, error, and undefined are “safe”, infinite recursion is “safe”, exhausting system resources and deadlocking is “safe”, and any function which returns IO can “safely” launch missiles. The notion of safety employed by Safe Haskell is very narrow and not helpful in general. In my book this counts as a fundamental design flaw.
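To make that narrowness concrete, here is a minimal module that GHC accepts under `-XSafe` without complaint (a sketch; `Control.Exception` is marked Trustworthy, so a Safe module may import it):

```haskell
{-# LANGUAGE Safe #-}
module Main (main) where

import Control.Exception (SomeException, evaluate, try)

-- All of the following pass the Safe Haskell checks:
bomb :: Int
bomb = error "boom"   -- partial: crashes when forced

spin :: Int
spin = spin           -- non-termination is also "safe"

main :: IO ()
main = do
  r <- try (evaluate bomb) :: IO (Either SomeException Int)
  putStrLn $ case r of
    Left _  -> "accepted by -XSafe, crashed at runtime"
    Right n -> "no crash: " ++ show n
```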


@Bodigrim If I take an untrusted module that defines a value exerciseSolution :: Seq Text and compile it into a safe program that asserts that the student’s exercise solution is correct, with a timeout to take care of infinite recursion, what could possibly go wrong security-wise?

IO is irrelevant. You would never run IO from an untrusted module.
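The trusted side of that scenario might look like the following sketch (`checkSolution`, the one-second budget, and the list stand-in for `exerciseSolution` are all invented for illustration): because the untrusted value is pure, the only failure modes left are exceptions and non-termination, and both can be caught.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, evaluate, try)
import System.Timeout (timeout)

-- Compare a student's pure value against the expected answer, guarding
-- against the two failure modes of pure code: exceptions (error,
-- undefined, pattern-match failure) and non-termination.
checkSolution :: Eq a => a -> a -> IO Bool
checkSolution expected solution = do
  r <- timeout 1000000 $                          -- 1-second budget
       try (evaluate (solution == expected))
  pure $ case r of
    Just (Right ok)                  -> ok        -- terminated cleanly
    Just (Left (_ :: SomeException)) -> False     -- crashed while forcing
    Nothing                          -> False     -- looped: timed out

main :: IO ()
main = do
  checkSolution [1, 2, 3] [1, 2, 3 :: Int] >>= print     -- True
  checkSolution [1 :: Int] [last (cycle [1])] >>= print  -- False: times out
```

Resource exhaustion is of course not handled here; that needs OS-level limits (ulimit, cgroups, and friends).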

I personally have never understood what Safe does, and as such would not even know what I’d be missing if it were deprecated. So yeah, unless someone has a good reason to keep it, I’d also vote for deprecation if it hinders GHC developers. :woman_shrugging:


Please correct me if I’m wrong: The demo on the front page of utterly depends on Safe Haskell, does it not?

I think section 5 of the Safe Haskell paper gives a pretty good outline of what the imagined use cases originally were. That demo, which is still in use today, is one of them. It is acknowledged from the very beginning that untrusted effectful code would need to be defined in a custom limited monad that was then interpreted by trusted code, not in IO directly.
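That “custom limited monad” pattern can be sketched in a few lines (names here are invented; in a real library the constructor would be hidden by the module’s export list, and the module marked Trustworthy):

```haskell
-- In a real setting this would be:  module RIO (RIO, runRIO, rioPutStrLn)
-- with the RIO constructor deliberately NOT exported, so untrusted Safe
-- code can only use the operations whitelisted below.
newtype RIO a = RIO { unRIO :: IO a }

instance Functor RIO where
  fmap f (RIO m) = RIO (fmap f m)
instance Applicative RIO where
  pure            = RIO . pure
  RIO f <*> RIO x = RIO (f <*> x)
instance Monad RIO where
  RIO m >>= k = RIO (m >>= unRIO . k)

-- Trusted code interprets untrusted computations:
runRIO :: RIO a -> IO a
runRIO = unRIO

-- The whitelist of effects available to untrusted code:
rioPutStrLn :: String -> RIO ()
rioPutStrLn = RIO . putStrLn

main :: IO ()
main = runRIO (rioPutStrLn "hello from restricted code")
```

Safe Haskell’s role is then only to guarantee the untrusted module cannot smuggle in unsafePerformIO or the hidden constructor; the actual capability control lives in this export list.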

For instance, the untrusted module can allocate a petabyte of memory and crash your system.

I find this quite alarming. Can you offer a demonstration? I would have expected the OOM killer to protect the system and limit the scope of such a crash to only the process. I have certainly written many a memory-hog program by accident, and never experienced a system crash as a result. Perhaps a malicious user could do more damage, though. I’m anxious to learn how.

Fundamentally, running untrusted code is a solved problem: every CI service out there has figured it out. It does not make sense to reinvent an ivory wheel which can spin only on roads made of fairy dust.

There could be just enough memory left for your program itself, but every other application that tries to allocate will be OOM-killed.


Deprecate it. Just make sure old stable code doesn’t need a re-release because the language extension is no longer valid. There’s a lot of stable, mature libraries out there which would break otherwise.

You do raise a good point: lambdabot and similar tools like Chris Smith’s CodeWorld do make use of whitelisting of code.

Instead of relying on Safe Haskell, it could have limited users to a whitelist of available imports. The same applies to lambdabot etc.
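A naive sketch of that idea (everything here is invented for illustration; real tools like mueval parse the module properly rather than matching source lines):

```haskell
import Data.List (stripPrefix)
import Data.Maybe (fromMaybe)

-- Modules untrusted submissions may import (illustrative set):
allowed :: [String]
allowed = ["Data.List", "Data.Char", "Data.Maybe"]

-- Crude extraction of imported module names from source text.
importedModules :: String -> [String]
importedModules src =
  [ takeWhile (`notElem` " (") (dropQualified rest)
  | l <- lines src
  , Just rest <- [stripPrefix "import " (dropWhile (== ' ') l)]
  ]
  where
    dropQualified s = fromMaybe s (stripPrefix "qualified " s)

checkImports :: String -> Either String ()
checkImports src =
  case filter (`notElem` allowed) (importedModules src) of
    []  -> Right ()
    bad -> Left ("disallowed imports: " ++ unwords bad)

main :: IO ()
main = do
  print (checkImports "import Data.List (sort)\nmain = print (sort [2,1])")
  print (checkImports "import System.Process\nmain = undefined")
```

Note that an import whitelist alone does not rule out unsafePerformIO hidden behind Template Haskell or FFI, which is precisely the gap Safe Haskell was designed to close.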

But that’s Linux only? What do we do outside of Linux? On Windows, macOS, BSD, …?

I’m not sure this is completely true. For proper isolation, CI services like GitHub Actions isolate at the operating-system level (they spin up a new virtualized OS instance). GitHub (shared) runners, GitLab shared runners, Buildkite runners, … are all pretty terrible at isolation (outside of Linux).

On topic though: I think the idea of Safe is good; the ergonomics are not, nor is the adoption. This should be rethought.

I wish we had some way to force levels of purity. Something that allowed us to guarantee, in the type signature, that no IO was used (e.g. via unsafePerformIO or FFI calls, …), and, in a separate step, to prove that the function did not use any system-dependent values either (e.g. anything unsized/implicitly sized).
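One hypothetical shape for such signatures (everything below is invented to illustrate the wish, not an existing feature): a phantom index recording whether a computation depends on anything system-specific, so the type checker rejects mixing the levels.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE KindSignatures #-}
import Data.Bits (finiteBitSize)

-- The purity levels we would like to distinguish (illustrative):
data Purity = Pure | SysDependent

-- A computation tagged with the level it needs.
newtype Checked (p :: Purity) a = Checked a

-- Only fully pure computations can be unwrapped freely:
runPure :: Checked 'Pure a -> a
runPure (Checked x) = x

-- A platform-dependent value is forced to carry the SysDependent tag:
machineWordBits :: Checked 'SysDependent Int
machineWordBits = Checked (finiteBitSize (0 :: Word))

main :: IO ()
main = do
  print (runPure (Checked (2 + 2 :: Int)))  -- fine: tagged Pure
  -- print (runPure machineWordBits)        -- rejected: wrong purity level
  let Checked n = machineWordBits in print (n > 0)
```

The hard part, of course, is not this wrapper but making GHC *infer* the tags and refuse to let unsafePerformIO or FFI launder a SysDependent value into a Pure one, which is the guarantee being wished for.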