Any evidence for this? I don’t really agree. There’ve been lots of very subtle but dangerous bugs in GHC, such as broken maths on M1 in recent releases.
I don’t think there’s any use in making claims about the number of bugs beyond considering the TCB. The rest is subjective, unless we have data.
Not at all, because SafeHaskell doesn’t give any protection whatsoever. I’ve pointed out multiple times in this thread what the research standard on secure programming is (as in: language-level security). SafeHaskell isn’t anywhere near it.
Security layers don’t work this way: if you can’t trust code generation, you can’t trust the type system either.
If you don’t trust your kernel, you can’t trust anything else running on your OS.
This can be solved by a sufficiently sophisticated effects system that enforces that no system calls can happen outside of declared effects.
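To make the idea concrete, here’s a minimal sketch of that principle (the module and function names `Restricted`, `readConfig`, `logLine`, `runRestricted` are purely illustrative, not from any existing library): because the constructor wrapping `IO` is not exported, client code can only perform the effects this module chooses to expose. A real effects system generalises this, but the enforcement mechanism is the same.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Hypothetical sketch: a restricted monad whose constructor is hidden,
-- so the only way to reach IO is through the effects exported below.
module Restricted
  ( Restricted      -- abstract: the IO-wrapping constructor is NOT exported
  , readConfig
  , logLine
  , runRestricted
  ) where

newtype Restricted a = Restricted (IO a)
  deriving (Functor, Applicative, Monad)

-- The only "system call"-like effects this module grants to clients.
readConfig :: FilePath -> Restricted String
readConfig = Restricted . readFile

logLine :: String -> Restricted ()
logLine = Restricted . putStrLn

-- The trusted boundary: only the host application gets to run the action.
runRestricted :: Restricted a -> IO a
runRestricted (Restricted io) = io
```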
However, even this is not a proper security layer if the compiler itself is untrustworthy.
Excellent point. You can have levels of security without verification.
But my stance is that if I do security inside the same system (especially the same language), then there’s no way to make this interesting other than going the formal verification route.
E.g. say you don’t trust bubblewrap. I could say, well… let me rewrite it in Haskell using SafeHaskell. Would that catch your attention? Probably not! And rightly so.
You want to know the binary behaves correctly, not that someone used fancy language features.
And bubblewrap, although not verified, works at exactly that layer: the outermost external layer, making no assumptions and having no knowledge of what Haskell is.
Your anecdotes about sandbox bugs are interesting and I certainly did not know about them.
I could put it another way: there’s a reason some companies don’t let programmers write the test cases for their own code. Subconscious bias, assumptions, etc. The only way to solve this is either to have proof that the tests are good and exhaustive, or to separate these concerns strictly.