Aesthetics, vocabulary and the idiomatic

If you only count lines of executable code (ignoring imports, pragmas, module declarations, etc.), they’re quite comparable.

And while there are differences in style at work here, I think some of those differences are exactly the sort of thing I was talking about. Look at this:

Exactly the same effect: one version achieves it by importing the best word from a module that already defines it, the other by building a larger expression from a more limited vocabulary. You don’t need to extrapolate anything to 10 kilo-LoC to see why one might be more compelling than the other.
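As a generic illustration of that contrast (a sketch of my own, not the snippets from the thread), compare Data.List.partition with the same effect spelled out by hand:

```haskell
import Data.List (partition)

-- The imported "word": one call that says exactly what is meant.
splitEvens :: [Int] -> ([Int], [Int])
splitEvens = partition even

-- The same effect built from a more limited vocabulary.
splitEvens' :: [Int] -> ([Int], [Int])
splitEvens' xs = (filter even xs, filter (not . even) xs)
```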

Another:

One solution loads a list of tuples into a map once and uses a pre-written function to find, in constant time, the appropriate element. The other scans a list of tuples every time it needs to do a lookup, with a custom scanning function. The asymptotics probably don’t matter for this toy problem, but I know which I’d rather see in a code review.
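For a sense of the shape of that difference, here is a hypothetical sketch using Data.IntMap (my own example, not the actual solutions under discussion):

```haskell
import qualified Data.IntMap.Strict as IntMap
import Data.IntMap.Strict (IntMap)
import Data.Maybe (fromMaybe)

-- Map-based: build the table once, then each lookup is effectively
-- constant time (O(min(n, W)) for IntMap).
table :: IntMap String
table = IntMap.fromList [(1, "one"), (2, "two"), (3, "three")]

nameOf :: Int -> String
nameOf k = fromMaybe "?" (IntMap.lookup k table)

-- List-based: a custom scan over the association list on every query,
-- O(n) per lookup.
nameOf' :: Int -> String
nameOf' k = go [(1, "one"), (2, "two"), (3, "three")]
  where
    go []               = "?"
    go ((k', v) : rest)
      | k == k'         = v
      | otherwise       = go rest
```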

One last example, though it’s no longer comparing the things you asked me to compare: your solution contains

The other solution inlines the lexing and the summing into the same scanl, instead of separating them out as you did, and I prefer the overall shape of your approach. But I’d have read this expression much faster if it were written
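A sketch of that shape, assuming Data.Biapplicative from the bifunctors package (my reconstruction of the idea, not the exact expression from the post):

```haskell
import Data.Biapplicative (biliftA2)

-- scanl over a [(Int, Int)] with initial (1, 1), applying (+) to each
-- component of the pair: a running componentwise sum.
runningSums :: [(Int, Int)] -> [(Int, Int)]
runningSums = scanl (biliftA2 (+) (+)) (1, 1)

-- ghci> runningSums [(2, 3), (4, 5)]
-- [(1,1),(3,4),(7,9)]
```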

I’ll freely admit that biliftA2 isn’t a very common vocabulary word! But the nice thing about this approach is that even if you haven’t seen biliftA2 before, you can pick it up pretty easily from context, especially if you know what liftA2 does. You’re scanling a [(Int, Int)], using an initial (1, 1), and somehow addition is being invoked twice per element; shouldn’t take too many guesses to figure out what that code does, and—bonus—once you’ve done so, you’ve learned a new vocabulary word (and maybe more than one if you go look up biliftA2 and get your first introduction to Biapplicative and Bifunctor that way).


Library functions exist not just to save the programmer work, but also to save future readers of the programmer’s code from having to relearn everything from scratch. Every vocabulary word you share with another project is a concept that someone reading both projects only has to learn once instead of twice. Learn them hungrily and use them liberally!


…but not too liberally:

Thank you @rhendric, your post is very useful to me.

I guess the benefits of this would be that (a) it may be a more optimal implementation, and (b) you’ll use it more than once :slight_smile: The cost is that your project now depends on code which you may not use. I’ve never really used small libraries besides FFI bindings. I guess I prefer to know what the code does when I can.

I think if you wanted to understand the code as a whole, you’d need to go find out what all the imports are doing unless they were already familiar. I guess it’s a preference in the end, but in other languages (e.g. Python) I strongly prefer solutions which rely solely on the standard library.

Very nice :slight_smile: I have immediate uses for this. I also find it easier to learn the theory of these things with tacit experience of them beforehand. Also the IntMap, very useful!

Re-learning always happens in my experience. The only thing I’ve seen proportionally reduce the time code takes to grok is writing less code :slight_smile:

> less vocabulary used means more comprehensible

I don’t think I would agree with this as a blanket statement.

“Vocabulary” exists for a reason, and often (though not always), that reason has to do with maintainability. It’s usually a tradeoff - introducing an abstraction, concept, or syntax construct adds to the knowledge required to understand the code, but it also helps convey intention or structure, and that reduces the brain footprint of figuring out what’s going on.

And that tradeoff depends a lot on the target audience. For a beginner in the language, the knowledge requirements are a major factor, because there’s only so much you can cram into a brain at a time, and using “too much vocabulary” will quickly overwhelm you. But someone more familiar with the language and its idioms will already understand a lot of “advanced vocabulary”, so the mental overhead is small, and the benefits of expressing intent more clearly outweigh that cost.


I guess that’s generally true about vernacular. For example, technical people speak with each other in technical terms: they could speak with each other in non-technical terms about technical things, but it would be less efficient.

That still implies, in my opinion, that technical vocabulary communicates more with fewer symbols (given more shared assumptions). Given, say, 10k+ LoC, it would be strange to me if a wider vocabulary resulted in more code than would have been necessary with a smaller vocabulary, ceteris paribus.

“Vocabulary” cuts both ways. Even though we can express the gamut of ideas using only a very limited subset of the available vocab, we cannot always do so effectively.

In the natural language setting, Randall Munroe’s Thing Explainer (incl. Up Goer Five) is a good example of this. Are these explanations in simple terms always “best”, even when they are relatively short? I would argue no.

Haskell is a language of abstractions. By building out functionality into a shared, digestible set of abstractions, we allow ourselves to forget the details and trust each piece in turn.
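A tiny illustration of that “trust each piece” style, using only standard functions (my example, not one from the thread):

```haskell
import Text.Read (readMaybe)
import Data.Maybe (mapMaybe)

-- Each name is a shared abstraction we can trust without rereading its
-- implementation: lines splits the input, mapMaybe keeps the successful
-- parses, sum folds the results.
sumOfNumbers :: String -> Int
sumOfNumbers = sum . mapMaybe readMaybe . lines
```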

Once you go past a certain level of complexity, this is a huge benefit. A downside for people interacting with the language for the first time (and a difficulty that we have as educators) is that you’re not likely to come up against that level of complexity for quite a while.
