Deprecating Safe Haskell, or heavily investing in it?

No, they shouldn’t be the end of it. But it raises a similar question: beyond parametric encapsulation and lazy evaluation, has anything else been discovered that has had a similar impact to either of them?

As Bodigrim and you have both noted, more and more unsafe code is appearing (with various levels of “trust-washing”), none of which suggests a new commonality or abstraction that can be used to manage the mayhem the way parametric encapsulation and lazy evaluation help us to do. Instead, it’s just an ever-growing pile of “unsafe abominations” which makes the claim that Haskell is a purely functional programming language an ever-growing joke…

…so let’s solve both problems at once:

  • Haskell is relabelled the “nondeterministic functional programming language” - the unsafe prefix can then be replaced with nondet (a sketch of what that renaming could look like follows this list).

  • As I suggested here, unrecoverable errors in Haskell no longer stop programs, but instead send them into an infinite regression: debuggers can then be attached to the still-running programs to examine their call stacks or other runtime metadata as needed to discover what went wrong (a minimal sketch also follows this list).
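
For the first point, here is a rough sketch of what the renaming could look like in practice. The module name `System.IO.Nondet` and the `nondet*` names are purely illustrative (my own invention, not an existing or proposed API): just a thin shim that re-exports today’s `unsafe*` operations under names that advertise the nondeterminism.

```haskell
-- Hypothetical compatibility shim: the familiar unsafe* operations from
-- base, re-exported under a nondet* prefix so the name itself signals
-- that results may depend on evaluation order.
module System.IO.Nondet
  ( nondetPerformIO
  , nondetInterleaveIO
  , nondetDupablePerformIO
  ) where

import qualified System.IO.Unsafe as Unsafe

-- | Same semantics as 'System.IO.Unsafe.unsafePerformIO'.
nondetPerformIO :: IO a -> a
nondetPerformIO = Unsafe.unsafePerformIO

-- | Same semantics as 'System.IO.Unsafe.unsafeInterleaveIO': the
-- interleaved effects run at an unspecified point, if at all.
nondetInterleaveIO :: IO a -> IO a
nondetInterleaveIO = Unsafe.unsafeInterleaveIO

-- | Same semantics as 'System.IO.Unsafe.unsafeDupablePerformIO': the
-- action may additionally be performed more than once.
nondetDupablePerformIO :: IO a -> a
nondetDupablePerformIO = Unsafe.unsafeDupablePerformIO
```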
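
And for the second point, a minimal sketch under my own assumptions: the helper `hangOnError` is illustrative only, and a real implementation would need runtime support rather than a library wrapper. The idea is simply that instead of terminating, the failed program parks itself in an idle loop, so a debugger can attach to the still-live process and inspect it.

```haskell
import Control.Concurrent (threadDelay)
import Control.Exception (SomeException, catch)
import Control.Monad (forever)
import System.IO (hPutStrLn, stderr)

-- Illustrative wrapper: instead of letting an unrecoverable error kill
-- the process, report it and then spin idly forever, keeping the
-- process (and its heap, threads, call stacks, ...) alive for a
-- debugger to attach to.
hangOnError :: IO a -> IO a
hangOnError act = act `catch` \e -> do
  hPutStrLn stderr ("unrecoverable error: " ++ show (e :: SomeException))
  hPutStrLn stderr "process kept alive; attach a debugger to inspect it"
  forever (threadDelay maxBound)

main :: IO ()
main = hangOnError (error "boom")
```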

The research and development that went into HasFuse, an early Haskell implementation based on GHC, could be useful for this task.