In retrospect, was burning bridges a mistake?

They’ve learnt so fast (because taught so well) that they never need to ask questions on Stack Overflow or Reddit; they don’t participate in discourse because only strugglers hang out there; they’re busy using Haskell in industry, and find GHC has exactly the functionality they need, so they never need to ask for enhancements. Whatever enhancements do come along, it turns out the Steering Committee knew exactly what they wanted, by mind-meld.

And in their coffee breaks they program (in Haskell of course) space lasers (in orbit on spaceship USS Dependent Nirvana) to shoot down all those pigs floating across the horizon.

Or … they found the Intro to Haskell/FP module something of an out-of-body experience; the programming lessons passed from the notes of the lecturer to the notes of the student to the course exam papers without troubling the minds of either; they got their grades; they moved on without a backward look. To the extent they reflected on it at all, they thought it rather weird for such a much-talked-about language not to have a records system. They presumed that must get taught in the second-semester module.

1 Like

To the extent they reflected on it at all, they thought it rather weird for such a much-talked-about language not to have a records system. They presumed that must get taught in the second-semester module.

So if you really want to impress us…port that wondrous records system to GHC, so we can all see what we’ve been missing for so long! Considering just how often you’ve promoted it, it should be absolutely awesome!

1 Like

Many of our students do indeed think functional programming / programming in Haskell is difficult. They learn many concepts and ideas (e.g. recursion, algebraic data types, higher-order functions, type inference, equational reasoning, typeclasses, some IO/monadic programming) that are new or different to them. There is simply a lot to learn, and hence they think this is difficult. I can sympathize with that. (By the way: I never claimed FP/Haskell was easy to teach: since there is a lot to learn, there is a lot to cover in a relatively short time, and that sometimes makes it challenging.)

However, I don’t particularly see why they should find the notion of a Foldable typeclass so strange or difficult. If you can accept that there is a “Num” typeclass that expresses that there are types that look like numbers (and thus provides functions to work with such things), then is it really all that surprising that there is a “Foldable” typeclass that models “container-like types”, with functions that allow you to combine/flatten/reduce the container into some value? The hard part is explaining the idea of a fold (i.e. a higher-order function) in the first place, not so much that you can define something like fold/foldr/foldMap for lists as well as for some other data structure.
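To make the comparison concrete, here is a minimal sketch (my own illustration, with a hypothetical Tree type) of giving a Foldable instance to a second container, so that the very same foldr and sum work on it as on lists:

-- A hypothetical binary tree with values at the leaves.
data Tree a = Leaf a | Branch (Tree a) (Tree a)

instance Foldable Tree where
  foldr f z (Leaf x)     = f x z
  foldr f z (Branch l r) = foldr f (foldr f z r) l

total :: Int
total = sum (Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3)))  -- 6, just like sum [1,2,3]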

Of course there are some weird instances of Foldable. We don’t explicitly spend time on those, and in all likelihood our students will not try to use any of those instances themselves throughout the course (either because they don’t need them to solve their homework questions, or because they don’t fully realize that they could have attempted to use them). Whether they will hit those issues later I don’t know, but at least I don’t think it is a “beginner” issue.

In all honesty, issues in terms of tooling (e.g. getting a working compiler, editor, etc.) are much more challenging to deal with as a teacher: if students have a frustrating start, it is difficult to motivate them afterwards. Moreover, it is time-consuming to help students get something to work, since every time it is a different puzzle you have to solve to figure out what is wrong. Luckily things have improved quite a bit in the last few years, and I hope they will continue to do so in the future :).

11 Likes

Fair enough. Having said that I don’t think FTP is bad because of beginners, but just because it makes my life more difficult ;-).

1 Like

A precis of our lecture on the topic:

Mathematical division (over, say, the reals) is not a function, because it violates the property that every element of the domain (say, pairs of reals) maps to exactly one unambiguous value in the codomain.

To make division into a function, we can either exclude (x,0) from the domain, or make the codomain a union of the reals with a singleton of some sentinel value indicating failure (e.g. { ⊥ } ). The corresponding pattern in Haskell would be either to restrict the input (via a newtype) or add a sentinel value to the return value. The latter is more common, and Maybe is the standard vehicle.
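A minimal sketch of that latter pattern (using a hypothetical safeDiv name; the Eq constraint is needed to compare the divisor against 0):

safeDiv :: (Eq a, Fractional a) => a -> a -> Maybe a
safeDiv _ 0 = Nothing        -- the sentinel for the excluded input
safeDiv x y = Just (x / y)   -- e.g. safeDiv 6 3 == Just 2.0, safeDiv 6 0 == Nothing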


I’m sure you know all of the above! But this is more or less the exact formulation that is used to explain it to students who have (at that point) less than 3 weeks of functional programming knowledge, and they are able to take this in their stride as well. This is off topic though, so I’d recommend we don’t go on about it :slight_smile:

2 Likes
import Prelude hiding ((/))
import qualified Prelude

newtype NonZero a = NonZero a

-- Division whose type rules out a zero divisor.
(/) :: Fractional a => a -> NonZero a -> a
(/) x (NonZero y) = x Prelude./ y

-- Partial smart constructor: only defined for non-zero arguments.
unitNonZero :: (Eq a, Num a) => a -> NonZero a
unitNonZero x | x /= 0 = NonZero x

bindNonZero :: NonZero a -> (a -> NonZero b) -> NonZero b
bindNonZero (NonZero x) k = k x
import Numeric.Natural (Natural)

divide :: Natural -> Natural -> Natural
divide x y | x < y = 0
           | otherwise = 1 + divide (x - y) y

divide x 0 = (\ y -> 1 + divide (x - y) y) 0  {- x >= y -}
           = 1 + divide (x - 0) 0             {- application -}
           = 1 + divide x 0                   {- (-) identity -}
           = 1 + ⊥                            {- (divide x 0) regresses infinitely -}
           = ⊥                                {- (+) strict -}

…hence you either accept that division can fail:

(/) :: Fractional a => a -> a -> a

…or you make it annoyingly explicit, either that it cannot fail or that it might:

  • (/) :: Fractional a => a -> NonZero a -> a
    
  • (/) :: Fractional a => a -> a -> Maybe a
    

It would have been an interesting experience to rewrite everything in Haskell that relies on division to use the more explicit style for that course - the students must have been very impressed!


This is off topic though, so I’d recommend we don’t go on about it.

Agreed.

Well, I think the specific hard part is explaining how a function can take a function as an argument. I had the advantage of learning about foldr before FTP, back when it was dedicated to lists.

(And I appreciate my learning experience is anecdata, so for full disclosure: I’d dabbled in LISP many years before, so higher-order functions didn’t seem so bad. Nevertheless, partial application made me feel a bit wobbly. At least I could get that secure in my mind before …)

This claim is a great surprise to me. The instances of the Num class (at least the ones in the Prelude) are concrete types, and the methods within Num are hugely familiar.

Constructor classes have a kind of ‘ghost in the machine’: the methods can’t say anything about the type of the contents. (For example, to say the contents must be in Ord for a BTree or a rose tree used as a container.) And this is a total annoyance – so much so that there have been many proposals over the years to somehow smuggle a constraint on the contents inside constructor classes, or at least to acknowledge that the contents will have some constraint.
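A small sketch of that annoyance (my own example, not from the post): a map over Data.Set is easy to write as a standalone function, but it cannot be made the fmap of a Functor instance, because the method’s type leaves nowhere to put the constraint on the contents:

import qualified Data.Set as Set

-- Fine as a standalone function: it can demand Ord b itself.
mapSet :: Ord b => (a -> b) -> Set.Set a -> Set.Set b
mapSet f = Set.fromList . map f . Set.toList

-- But it cannot serve as fmap for a Functor instance: the method type
--   fmap :: (a -> b) -> f a -> f b
-- says nothing about the contents, so there is no room for `Ord b`.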

Teaching the Foldable class as is today, you’re introducing three concepts all at the same time:

  • That you can abstract out the constructor vs the contents;
  • That you can pass functions as arguments to methods, to act on the contents of unknowable type;
  • That you can abstract the constructor itself to some generic constructor-alike of constrained but essentially unknowable type.

Yes, eventually a Haskell programmer has to understand all that together. But I don’t see why a learner should be expected to triple-jump before they can walk.
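For what it’s worth, putting the two signatures side by side shows how much is bundled into the generalised version (the first is the pre-FTP, list-only type; the second is today’s Foldable type):

foldr :: (a -> b -> b) -> b -> [a] -> b                -- higher-order argument, concrete container
foldr :: Foldable t => (a -> b -> b) -> b -> t a -> b  -- plus an abstracted, constrained-but-unknowable container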

Anybody who thinks burning bridges was a mistake because of excessive overloading should spend some time programming in Elm. From my perspective the only mistake with burning bridges was that it stopped a bridge short. I’m continuously annoyed fmap and map have not been unified, to the extent that I overuse <$> so I don’t have to write fmap.

5 Likes

I’m continuously annoyed fmap and map have not been unified […]

It could have been worse:

In Lazy ML those functions are more simply named mapn (for n > 0, with map1 abbreviated to map).

1 Like

I developed the habit of pretending map does not exist and just writing fmap everywhere. That extra character is not much of a nuisance.

2 Likes

This probably outs me as a no-more-than-moderately-experienced Haskell dev, but I have not been able to form a consistent principle to apply on whether to prefer fmap or map—or in the context of pairs/Either, fmap or second.

In favor of map: it makes it clear that you believe you are working with a list, and if you are in the middle of a refactoring that, say, wraps the list in a Maybe, accidentally forgetting to swap out map results in a compiler error closer to the place you want it to be. Also, when reading code that walks through some complex types, using monomorphic functions like map is more revealing of which part of the structure the code is working with than a nest of identical fmaps.

In favor of fmap: if you are in the middle of a refactoring that replaces the list with a NonEmpty, fmap is one fewer change you need to make in the first place. Also, when writing code that walks through some complex types, you can often spam fmap and get something that does the right thing without having to think about it too hard.
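A rough illustration of both refactoring scenarios (my own example, with made-up names):

import Data.List.NonEmpty (NonEmpty)

-- Original code: a plain list.
bump :: [Int] -> [Int]
bump = map (+ 1)

-- Refactor 1: the list gets wrapped in a Maybe. The old `map (+ 1)` no
-- longer type-checks here, pointing straight at this call site; the fix
-- is to lift it over the new layer.
bumpMaybe :: Maybe [Int] -> Maybe [Int]
bumpMaybe = fmap (map (+ 1))

-- Refactor 2: the list becomes a NonEmpty. An fmap-based version needs
-- no change at all, since NonEmpty is also a Functor.
bumpNE :: NonEmpty Int -> NonEmpty Int
bumpNE = fmap (+ 1)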

I haven’t been able to pick a side and stick with it; I tend to go different ways on a case-by-case basis by intuition, and I always feel a little bad about it.

5 Likes

This isn’t (yet) idiomatic Haskell, but you can get all the benefits of map that you want without needing it to be a separate function: use type applications and write fmap @[] or fmap @(Either _). It also works to distinguish which part of the structure you’re working on: fmap @[] . fmap @Maybe.
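A small sketch of what that looks like in practice (made-up values; assumes the TypeApplications extension, which is on by default under GHC2021):

{-# LANGUAGE TypeApplications #-}

xs :: [Maybe Int]
xs = [Just 1, Nothing, Just 3]

-- The type applications pin down which layer each fmap acts on.
bumped :: [Maybe Int]
bumped = (fmap @[] . fmap @Maybe) (+ 1) xs   -- [Just 2, Nothing, Just 4]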

6 Likes

Last time this discussion came up someone mentioned GHC’s style guide says to prefer the monomorphic version:

As a rule of thumb, when writing monomorphic code, prefer using a monomorphic function over a typeclass-overloaded one.

  • A monomorphic function communicates information to the programmer, e.g. when reading map f wibbles one learns that wibbles is a list, as opposed to reading fmap f wibbles where one needs to run a mental typechecker to figure out the type of wibbles and then infer that fmap uses the instance for lists.
  • Monomorphic functions lead to easier-to-understand type errors.
  • It is easier to refactor code using monomorphic functions, as one can grep/search for a certain function name. Searching for mapBag is much, much more helpful than searching for fmap.
  • Overloaded functions rely on typeclass specialisation to be performant, which isn’t always reliable and might require additional complexity, in the form of manual SPECIALISE annotations and RULES pragmas.
  • Using an overloaded function may allow the program to continue to compile after a refactor that changes the type of the argument – but the new behaviour may be flat-out wrong. As Kerckhove puts it: type-class polymorphism erodes type safety.

Although I feel like some of these points are also arguments against polymorphism and type inference in general. And most of these points could be solved by better tooling (HLS already solves the first point).

6 Likes

Take my opinion with a grain of salt, but I think having map and fmap as separate functions is great for people new to the language.

A beginner might have a harder time reasoning about a Functor error than a list error when they misuse a given function (:

2 Likes

Evidence from educators in the earlier discussion of “monomorphism is better for new users” pointed to the contrary.

Sure, there is nothing hard in understanding that length :: _Wtf t => _wtf -> Int returns an Int and works on a list. After all, most people manage to program without type signatures at all, so ignoring the type signatures altogether works, especially at a beginner level, when tasks are basically LISP exercises (which have been solved for decades without the need for types).

That doesn’t mean that people don’t get bitten later by cryptic error messages. Then again, error messages can be ignored. I myself rarely read error messages (but only look at where the error happens).

Yeah, absolutely. I’m just surprised by the evidence in this thread that this is not especially harmful to new users.

I myself rarely read error messages (but only look at where the error happens).

Me too! And I’ve heard that from a few others too. It’s an interesting phenomenon.

1 Like

Imagine that you are a new user who has just started to learn Haskell. You’ve written a few simple functions that work with numbers, and now you’re about to start working with lists. Do you think you’d find Foldable inherently more confusing than Eq, Ord, Num, Fractional, Integral and RealFrac, which you’ve already encountered (and had hand-waved away)? Introductory Haskell, as it is commonly taught, tends to be a confusing mess of relying heavily on type classes yet not initially teaching them, and “one more” doesn’t noticeably increase that burden.

1 Like

I have also taught Haskell to beginners several times, and I have to agree with the other experience reports that typeclass-polymorphic functions are usually not a big problem. The approach I took when teaching was to introduce type applications early on. We would then use ghci to first type in :t fmap which yields Functor f => (a -> b) -> f a -> f b. We would then look at :t fmap @Maybe and :t fmap @[] to inspect the type signatures (a -> b) -> Maybe a -> Maybe b and (a -> b) -> [a] -> [b]. In my experience this works pretty well.
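Roughly what such a ghci session looks like (exact output formatting may vary between GHC versions):

ghci> :set -XTypeApplications
ghci> :t fmap
fmap :: Functor f => (a -> b) -> f a -> f b
ghci> :t fmap @Maybe
fmap @Maybe :: (a -> b) -> Maybe a -> Maybe b
ghci> :t fmap @[]
fmap @[] :: (a -> b) -> [a] -> [b]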

One of the big problems I repeatedly saw with beginners is punning: The fact that we use both [5] and [Int] and [a] and [x], and similarly for tuples, is a major stumbling block. In order to properly tell them apart you have to be quite fluent in Haskell so that you can always distinguish contexts where a type and contexts where a term is expected.
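For instance (a minimal illustration of the punning, with made-up names):

xs :: [Int]      -- the brackets form a type: a list of Int
xs = [5]         -- the brackets form a term: a one-element list

swap :: (a, b) -> (b, a)   -- the parentheses form a tuple type
swap (x, y) = (y, x)       -- the parentheses form a tuple pattern and term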

12 Likes

One of the big problems I repeatedly saw with beginners is punning: The fact that we use both [5] and [Int] and [a] and [x], and similarly for tuples, is a major stumbling block.

  • For lists you could try using the constructors directly: 5:[], x:[], introducing the more convenient syntax when students are writing out lists of length 5…or 'H':'e':'l':'l':'o':',':' ':'R':'e':'a':'l':'W':'o':'r':'l':'d':'!':' ':' ':':':'-':'D':[] .

  • For tuples…perhaps an “it could be worse” approach is the least worst option: start off with the formal product-style notation and wait for the problems with that mathematical punning to appear, then switch to Haskell’s syntax.