It’s fine enough for a prototype, IMHO, since `string-interpolate` already depends on it and people seem to like and use that.
`Num` is widely acknowledged as a poor design, and probably shouldn’t be taken as licence to ship confusing things. I think a more accurate comparison would be `Foldable`: it makes things much better for veterans but more confusing for newbies. (I say this as a course tutor for a first-year/freshman Haskell course, where handwaving typeclasses before students had mastered recursion made things harder for the weaker students.) The `Foldable`/`Traversable` Proposal (FTP) was before my time, but IMHO we should have made the default a bit more newbie-friendly while still making it easy for advanced users to pick up the power. This might have looked like keeping the monomorphic list-consuming functions in `Data.List` (and re-exporting them through `Prelude`), and providing polymorphic ones in `Data.Foldable` for alternate preludes to set up.
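Concretely, the split I have in mind would look something like this (a sketch of signatures only, not a worked proposal):

```haskell
-- Data.List: monomorphic versions, re-exported from Prelude,
-- so beginners see type errors phrased in terms of lists.
length :: [a] -> Int
sum    :: Num a => [a] -> a
elem   :: Eq a => a -> [a] -> Bool

-- Data.Foldable: polymorphic versions, imported explicitly by
-- alternate preludes and experienced users who want the power.
length :: Foldable t => t a -> Int
sum    :: (Foldable t, Num a) => t a -> a
elem   :: (Foldable t, Eq a) => a -> t a -> Bool
```

The point being that “`Couldn't match expected type [a]`” is something a first-year student can act on, while a `Foldable t` constraint in the same error is not.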
It seems to me that it would be very valuable to allow newbies to opt into a form of string interpolation that’s going to fail comprehensibly in the common use cases. I lead a team of Haskellers at work, and part of that is helping decide very consciously how far we’re going to go with our Haskell. While we’re not a “simple Haskell” shop by any means (we use `lens`, there are a few GADTs around, etc.), we definitely have to consider the teachability of our chosen dialect and make sure that each approved extension carries its weight.
If I can’t explain to a newbie an error message that comes from a reasonable-looking misuse of `StringInterpolation`, then I can’t teach it to newbies, nor expect them to use it without at least an initial handle on all the features upon which it depends. We see this problem with `lens`: before people really get their heads around it and understand how to read and respond to the type errors it throws up, it is mysterious and frustrating to use. I think string interpolation needs to be simple enough to not have this property.
Is this the real crux of your objection to shipping a prototype library to explore the design space for interpolations? If so, I recommend pausing this process and putting up a GHC proposal for exposing its parser in a way that’s going to be convenient for TH use. A convenient way to parse snippets of Haskell source into an `ExpQ`/`DecQ`/`PatQ` or whatever would be fantastic.
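I’m imagining something like the following, where every name is invented for illustration; the closest thing today is `haskell-src-meta`, which goes through a separate parser that lags behind GHC’s:

```haskell
-- Hypothetical interface: parse a snippet of Haskell source with
-- GHC's own parser, honouring the extensions in force at the splice
-- site. Module and function names here are made up.
module Language.Haskell.TH.ParseGhc
  ( parseExp, parsePat, parseDecs ) where

import Language.Haskell.TH (ExpQ, PatQ, DecsQ)

parseExp  :: String -> ExpQ
parsePat  :: String -> PatQ
parseDecs :: String -> DecsQ
```

With something like that available, a quasiquoter for interpolation becomes a small library, not a language extension.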
IME, the more bells and whistles I add to a design in the name of “future extensibility” and “potential for innovation”, the more I find that I’ve overdesigned the thing I’m trying to build. Then I end up having to trim back the design to extend it in the ways I actually need, or to make the design comprehensible to others. If you’re shipping a new extension to GHC, you won’t have that luxury, and it will stick around approximately forever. I want your proposal to succeed, by which I mean “is enabled by many users”, not “is landed in GHC and released”. I think the best way to do that is to seriously study the use cases that it’s built for, and make sure that new Haskellers can pick it up and run with it. The survey has only a couple of short examples, but I think it’s worth enumerating more and seeing how they behave under the different proposed schemes. These use cases should, IMHO, include failures:

- lexical failures, where the interpolations are malformed;
- failures where the interpolations refer to the wrong things;
- failures where the result type of the string is ambiguous;
- failures where the extension is disabled (will it just report “variable not in scope: s”?);
- etc.
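As a concrete starting point, here is the happy path with today’s `string-interpolate`, plus some of the misuses I’d want catalogued (the package and its `i` quoter are real; the commented cases are questions to study, not claims about what the errors currently say):

```haskell
{-# LANGUAGE QuasiQuotes #-}

import Data.String.Interpolate (i)

-- The happy path: the use case a newbie reaches for first.
greeting :: String
greeting = let name = "world" in [i|Hello, #{name}!|]

-- Misuses worth running under each proposed scheme:
--   [i|Hello, #{nmae}!|]   -- typo in the interpolated variable
--   [i|Hello, #{name!|]    -- malformed interpolation syntax
--   print [i|#{name}|]     -- ambiguous result type
--   (and the same snippets with QuasiQuotes / the extension off)

main :: IO ()
main = putStrLn greeting
```

Writing these out for each candidate design, and eyeballing the actual GHC output side by side, would tell us a lot about teachability.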
Good string interpolation has the potential to be a great QoL improvement, and I commend you for taking it on. But it also has the potential to be a great newbie-confuser, and I would really rather that didn’t happen.
Also, why is the interpolation character `s`? I could understand `i` or `f`, but `s` surprises me.