I have been developing a very small spreadsheet DSL on and off for months.
When it comes to handling errors, initially I chose the vanilla `Maybe` route.
But… if this is a spreadsheet DSL, I thought I'd rather not pull `fmap`, `>>=`, etc. in. Too much machinery for such a simple domain.
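For context, the `Maybe` route looks roughly like this (a minimal sketch, not my actual DSL code; `safeDiv` and `cellFormula` are made-up names for illustration):

```haskell
-- The Maybe route: every partial operation returns Nothing on bad
-- input, and the failure propagates through the whole formula.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x d = Just (x `div` d)

-- Combining cells drags in the monadic machinery (>>=, fmap, do):
cellFormula :: Int -> Int -> Maybe Int
cellFormula a b = do
  q <- safeDiv a b   -- a Nothing here short-circuits everything below
  pure (q + 1)
```

This is exactly the machinery I wanted to avoid: every consumer of `safeDiv` has to thread `Maybe` through, even though the only thing that can go wrong is one bad input.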
So after a lot of thinking and finagling, I thought I'd use refinement types and the refined library (which is nice, but I will probably post some questions about it soon!).
So now I can define functions that require more specific arguments, e.g. a non-zero denominator for a `div` function, and so on.
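To make the shape concrete, here is a hand-rolled, self-contained sketch of the same idea. (This is not the refined library's API — with refined the argument would be something like `Refined NonZero Int` — it's just the smart-constructor pattern the library generalizes, with names I made up.)

```haskell
-- The refinement idea by hand: the denominator's *type* promises it
-- is non-zero, so division itself is total and needs no Maybe.
newtype NonZeroInt = UnsafeNonZeroInt Int

-- Smart constructor: the only way to build a NonZeroInt is to prove,
-- at the input boundary, that the value isn't zero.
nonZero :: Int -> Either String NonZeroInt
nonZero 0 = Left "argument should not be zero"
nonZero n = Right (UnsafeNonZeroInt n)

-- No Maybe in sight: the precondition lives in the argument type.
divNZ :: Int -> NonZeroInt -> Int
divNZ x (UnsafeNonZeroInt d) = x `div` d
```

The refined library goes further: as I understand it, it also offers a Template Haskell splice so that literals known at compile time are checked by the compiler itself, rather than validated at the boundary.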
And if you try to divide something by zero, you get a compile-time error at the input, rather than a runtime error at the output.
I am OK assuming/ensuring that all input happens before compile time (the inputs are the cells in your spreadsheet), so in this case runtime and compile time are, in a sense, analogous.
But I wonder: what advantage have I actually achieved? It seems I have come almost full circle, from a `div` by zero runtime exception to an "argument should not be zero" compile-time error.
Perhaps I have somehow stumbled upon the Curry-Howard isomorphism…
Intuitively, I prefer compile-time checking, but I am finding it hard to fully describe what it is I have achieved by implementing it!! :)