This isn’t (yet) idiomatic Haskell, but you can get all the benefits of map that you want without needing it to be a separate function: use type applications and write fmap @[] or fmap @(Either _). It also works to distinguish which part of the structure you’re working on: fmap @[] . fmap @Maybe.
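For readers who have not used type applications before, here is a minimal sketch of what this looks like (the function names are made up for illustration, and TypeApplications must be enabled):

{-# LANGUAGE TypeApplications #-}

-- Pin fmap to a concrete functor so the reader can see which layer is mapped.
bumpAll :: [Int] -> [Int]
bumpAll = fmap @[] (+ 1)

-- Compose type-applied fmaps to target the inner Maybe layer of a [Maybe Int].
bumpInner :: [Maybe Int] -> [Maybe Int]
bumpInner = fmap @[] (fmap @Maybe (+ 1))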
Last time this discussion came up, someone mentioned that GHC’s style guide says to prefer the monomorphic version:
As a rule of thumb, when writing monomorphic code, prefer using a monomorphic function over a typeclass-overloaded one.
- A monomorphic function communicates information to the programmer, e.g. when reading map f wibbles one learns that wibbles is a list, as opposed to reading fmap f wibbles where one needs to run a mental typechecker to figure out the type of wibbles and then infer that fmap uses the instance for lists.
- Monomorphic functions lead to easier-to-understand type errors.
- It is easier to refactor code using monomorphic functions, as one can grep/search for a certain function name. Searching for mapBag is much, much more helpful than searching for fmap.
- Overloaded functions rely on typeclass specialisation to be performant, which isn’t always reliable and might require additional complexity, in the form of manual SPECIALISE annotations and RULES pragmas.
- Using an overloaded function may allow the program to continue to compile after a refactor that changes the type of the argument – but the new behaviour may be flat-out wrong. As Kerckhove puts it: type-class polymorphism erodes type safety.
Although I feel like some of these points are also arguments against polymorphism and type inference in general. And most of these points could be solved by better tooling (HLS already solves the first point).
Take my opinion with a grain of salt, but I think having map and fmap as separate functions is great for people new to the language.
A beginner might have a harder time reasoning about a Functor error than about a list error when they misuse a given function (:
Evidence from educators on the earlier discussion of “monomorphism is better for new users” pointed to the contrary.
Sure, there is nothing hard in understanding that length :: _Wtf t => _wtf -> Int returns an Int and works on a list. After all, most people manage to program without type signatures at all, so ignoring the type signatures altogether works, especially at a beginner level, when tasks are basically LISP exercises (which have been solved for decades without the need of types).
That doesn’t mean that people don’t get bitten later by cryptic error messages. Again, error messages can be ignored. I myself rarely read error messages (but only look at where the error happens).
Yeah, absolutely. I’m just surprised by the evidence in this thread that this is not especially harmful to new users.
I myself rarely read error messages (but only look at where the error happens).
Me too! And I’ve heard that from a few others too. It’s an interesting phenomenon.
Imagine that you are a new user who has just started to learn Haskell. You’ve written a few simple functions that work with numbers, and now you’re about to start working with lists. Do you think you’d find Foldable to be inherently more confusing than Eq, Ord, Num, Fractional, Integral and RealFrac, which you’ve already encountered (and had hand-waved away)? Introductory Haskell, as it is commonly taught, tends to be a confusing mess of relying heavily on type classes yet not initially teaching them, and “one more” doesn’t noticeably increase that burden.
I have also taught Haskell to beginners several times, and I have to agree with the other experience reports that typeclass-polymorphic functions are usually not a big problem. The approach I took when teaching was to introduce type applications early on. We would then use ghci to first type in :t fmap, which yields Functor f => (a -> b) -> f a -> f b. We would then look at :t fmap @Maybe and :t fmap @[] to inspect the type signatures (a -> b) -> Maybe a -> Maybe b and (a -> b) -> [a] -> [b]. In my experience this works pretty well.
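Roughly, such a ghci session looks like this (output abbreviated, and assuming TypeApplications is enabled):

> :set -XTypeApplications
> :t fmap
fmap :: Functor f => (a -> b) -> f a -> f b
> :t fmap @Maybe
fmap @Maybe :: (a -> b) -> Maybe a -> Maybe b
> :t fmap @[]
fmap @[] :: (a -> b) -> [a] -> [b]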
One of the big problems I repeatedly saw with beginners is punning: the fact that we use both [5] and [Int] and [a] and [x], and similarly for tuples, is a major stumbling block. In order to properly tell them apart you have to be quite fluent in Haskell, so that you can always distinguish contexts where a type is expected from contexts where a term is expected.
One of the big problems I repeatedly saw with beginners is punning: the fact that we use both [5] and [Int] and [a] and [x], and similarly for tuples, is a major stumbling block.
- For lists you could try using the constructors directly: 5:[], x:[], introducing the more convenient syntax when students are writing out lists of length 5… or 'H':'e':'l':'l':'o':',':' ':'R':'e':'a':'l':'W':'o':'r':'l':'d':'!':' ':' ':':':'-':'D':[].
- For tuples… perhaps an “it could be worse” approach is the least worst option: start off with the formal product-style notation and wait for the problems with that mathematical punning to appear, then switch to Haskell’s syntax.
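To make the punning concrete, here is a small illustrative snippet showing the same bracket syntax at the type level and the term level, next to the de-sugared spellings that keep the two apart:

xs :: [Int]   -- type-level brackets: "list of Int"
xs = [5]      -- term-level brackets: a one-element list

ys :: [] Int  -- the list type constructor applied to Int
ys = 5 : []   -- the (:) data constructor applied to 5 and the empty list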
I’m sure you’re pumped to hear that the proposal to introduce -XNoListTuplePuns has been accepted.
Of course, because industry users don’t want “all of their programs to stop compiling”, these pragmas won’t be on by default, but who knows, they might get included in a future -XGHC#### pragma.
Hmm. Strange. I had no problem whatsoever. Perhaps my advantage was that I was self-taught and didn’t have a lecturer trying to confuse me.
Indeed, what annoys me about lists is that as soon as I want to iterate over them, I can’t use the [ , ] syntax but have to switch to (x:xs).
If I’d encountered :: List a back then, I’d want to know what this List thing is, whereas :: [Int] is plain on its face – as is :: (Int, String). I’m not convinced there’s anything much wrong with punning in general (data T a = T a) – as compared with huge amounts of Dependent Haskell that is entirely baffling and not at all like Math: :: forall a. forall b -> ... – where’d the . go? That’s entirely too subtle a syntactic way to signal that a type, not a term, goes here.
An approach that can combine some of the advantages of both is to make it so that your datatypes all use the same names for equivalent functions, and then import them qualified; now your refactoring doesn’t have to change all the individual function names, it just needs to change the qualified import. E.g., suppose you have a module that uses Data.HashMap as its dictionary-shaped data type; so you would go import qualified Data.HashMap as HashMap, and possibly import Data.HashMap (HashMap); and you would then use functions like HashMap.lookup, HashMap.insert, etc. But now you realize that you’re putting untrusted inputs into your hashmap, making you susceptible to HashDOS attacks, so you decide to switch to Map. Well, now all you need to do is change your import and go s/HashMap/Map/g on your source file, and you’re basically done, because HashMap and Map use (roughly) the same APIs.
I’ve never done this, but you could take it a step further and name your qualified imports by the role your data type plays in the module, and even import the same module multiple times to serve different roles. E.g., you could do this at the top of your source file:
import qualified Data.HashMap as Dict

-- the alias goes through the qualified name, so only the import and this
-- synonym mention the concrete container
type Dict = Dict.HashMap

-- and then, for example:
example :: IO ()
example = do
  let myDict = Dict.singleton "hello" 1 :: Dict String Int
      myDict' = Dict.insert "world" 23 myDict
  print $ Dict.lookup "world" myDict'
And now if you want to switch to Map, you only have to change the import and the type alias.
In any case, I don’t think the refactoring argument is very strong; a much stronger argument is that when you write functions in terms of fmap, they can accept a much wider range of data types than if you had written them in terms of map. E.g., let’s say you want a function that takes a list of Maybes and replaces Nothings with a default value, like so:
setDefaults :: a -> [Maybe a] -> [a]
setDefaults def = map setDefault
  where
    setDefault Nothing = def
    setDefault (Just x) = x
Now that function will only work on lists. But if we change map to fmap, we can make it work on any functor, and it costs us literally no extra effort (except for a slightly longer type signature, and one extra character in the implementation):
setDefaults :: Functor f => a -> f (Maybe a) -> f a
setDefaults def = fmap setDefault
  where
    setDefault Nothing = def
    setDefault (Just x) = x
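For instance, a quick (hypothetical) GHCi check of the generalised version:

> setDefaults 0 [Just 1, Nothing, Just 3]
[1,0,3]
> setDefaults 'x' (Just Nothing)
Just 'x'
> setDefaults 'x' Nothing
Nothing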
In other words: when the type is known anyway, using a more general function doesn’t buy you an awful lot; but if you write code that gets reused in such a way that your choice of a more general function propagates into a more general type, then your code becomes more useful, because it can cater for a wider range of data types without sacrificing soundness.
You can take it even further by just using Backpack. I think it’s a bit unfortunate that Backpack works on the package level (I have some ideas about changing that), but otherwise it seems like a much more principled approach.
That is basically (ab?)using type classes as OOP-style interfaces. This is fine for general libraries: using functions on “unknown” data structures and combining them into new useful functions (still working on “unknown” data structures). For example, writing concatMap in terms of <>.
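A minimal sketch of what I take that last example to mean (concatMapGeneric is a made-up name): a concatMap that only assumes a Monoid’s <> and mempty on the result, rather than concrete lists:

concatMapGeneric :: Monoid m => (a -> m) -> [a] -> m
concatMapGeneric f = foldr (\x acc -> f x <> acc) mempty

-- e.g. concatMapGeneric show [1, 2, 3] == "123"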
Abstracting over known types for future refactoring benefits is another story …
However, you can try the containers from the mono-foldable package. It is part of the Yesod ecosystem and as such is very good. It creates “container” typeclasses with methods like lookup.
I use it a lot, my main big project being a Yesod app. In practice it’s a pain: error messages are verbose and most of the time unnecessary. You end up having to add type signatures everywhere (or type applications), and/or importing the original container package qualified, hiding things from exports (because all the names clash), etc. (basically a worse name problem than FTP).
I encourage you to give it a try.
Of course, if I could write code which compiles the first time, I wouldn’t have any of those errors/problems.
I think we’ve been talking of “beginners” as a homogeneous group, but in practice that is far from being the case.
- There are the ones who want to learn Haskell and will become proficient in it. They are willing to put in the time and the effort to understand things (and will eventually manage). This is probably how most of us have been at some point.
- There are the curious ones, who want to give it a try and will give up at the first hurdle (if it’s not tooling, it will be Monad …).
- And there are the students who have no intention whatsoever of learning Haskell, but have to (and will forget it as soon as they can).
- …
So when we want to be beginner friendly, are we thinking of making life easier for people from category #3 (so they can pass their exams and forget Haskell sooner), or of “converting” more #2s into #1s?
This is a No true Scotsman fallacy. We have evidence from three different people above, yet you dismiss it because their students are not True™ Beginners. Of course they are not; every True™ Beginner has to experience difficulties with Foldable in Prelude.
> all (error "have difficulty with Foldable in Prelude") []
True
The FTP was not about introducing the Foldable and Traversable classes or about adding some controversial instances to them. FTP was about exporting those classes by default from Prelude. For some versions of GHC and base this even meant that Data.List exported the Foldable methods instead of functions restricted to the list type.
The other controversial addition was defining Foldable instances for tuples. This together with FTP became really bad, because since then every Haskell programmer, beginner and expert alike, no longer gets a type error if they accidentally write length ('a','b'). Without FTP you had to write Fold.length ('a','b') to get hit by this problem.
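For anyone who has not been bitten yet: with the Foldable instance for pairs in scope, expressions like these compile and quietly fold over only the second component:

> length ('a', 'b')
1
> maximum ("hello", "world")
"world"
> sum (1, 2)
2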
Before FTP it was seen as a no-go to touch Prelude. Starting with FTP it became quite usual, which is bad. I always try to write my libraries to be compatible with a wide range of GHC and base versions. One way to achieve this is to minimize the use of GHC extensions. However, this does not save me from changes to base, and even more so to Prelude. I appreciate some changes, like making Semigroup a superclass of Monoid and Applicative a superclass of Monad, and I am happy to find ways to program that work before and after those changes. But I am unhappy about having to invest precious time into changes that only assist programmers who are too lazy to write import statements (or to let the IDE do it) and who do not care about former and future versions of GHC and base.
As a consequence I have given up on writing warning-free code across GHC versions. I pick one main GHC version and try to make my code warning-free in that version. This is a real drawback, since warnings can spot programming errors early, but not if they are lost in a series of import warnings caused by FTP and similar proposals.
If you are happy with <$> then you can also define your own map as an alias for fmap. You could use it at any call site, but not in an instance definition.
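A minimal sketch of that suggestion, hiding the Prelude’s list-only map so the polymorphic alias can reuse the name (purely illustrative):

import Prelude hiding (map)

-- A polymorphic map for any call site; it cannot be used as the method
-- name when defining a Functor instance.
map :: Functor f => (a -> b) -> f a -> f b
map = fmap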
I was wondering what people said about the Foldable instance for Tuple at the time so I tracked down the commit. The discussion links are dead.
Looking at the libraries mailing list archive from January 2011 it seems the discussion is here: Proposal: Add missing Foldable/Traversable instances for Prelude types
It’s not very interesting. The main point of discussion was whether it should be lazy or not.