My talk "Functional Programming: Failed Successfully" is now available!

Or it is simple and Haskell is infinitely complex.

…you and I just showed (informally) twice that it isn’t. But Haskell isn’t the only declarative programming language - there’s also Prolog; maybe it will be less complicated to concisely and mathematically describe state in that language…

Thank you for your detailed and nuanced response!

1 Like

Not without using primitives.

I’m informally sure that flipping a bit with an instruction of a finite state machine, with as many states as you have space for bits, is simple.

Sorry (ok, maybe not but I’m being polite) for derailing.

1 Like

So now that we’re back on-topic: why didn’t pragmatism work in 1978? Because if it did work, John Backus’s ACM Turing-Award lecture would surely have been about (some aspect of?) Fortran, rather than a “clarion call” for radical (at the time) change…

I might well be wrong, for sure. But do people expect everyone in the community to be right all the time? Or maybe the expectation is not to say anything unless there is strong evidence backing the claim. If so, natural conversation would be too difficult, but I can take that as a given.

I don’t, however, see how the technical arguments work here. Backward compatibility was not such a big concern elsewhere: it breaks constantly with new versions of GHC. It was not a big concern when introducing the significantly more severe “simplified subsumption” change (which was made opt-in, not opt-out, only afterwards), and the story of backward compatibility is known to be controversial. If the problem is the dot itself, that could be overcome by choosing a different syntax. I don’t mean it should have been done that way - the dot itself is valuable - but I don’t see how this is such a big problem. So I don’t find this specific case different in technical terms, and that makes me look for other reasons. But maybe I’m just an inattentive reader who interpreted that epic discussion on GitHub wrongly.

1 Like

This is true for me too, but with a big caveat: I think the group of people who want to use Haskell is much bigger than the group of people who use Haskell (maybe 10x, to take a very rough guess) and the number of people who would want to use Haskell, if they were introduced to it in a way that appealed to them, is bigger still (maybe 100x, again extremely roughly).

If we made an effort to make Haskell more accessible to those groups then we could help them benefit from Haskell, and their contributions in turn would help Haskell! Such an effort would involve improvements to tooling and documentation, and fostering a software engineering culture within Haskell that is accessible to a wider range of people.

I think I agree with OP in this regard, although I differ in how I think the matter is best expressed and discussed.

6 Likes

(I realised after posting that I’m replying to Tom, but this is really a reply to the whole thread.)

IMO, language adoption / “perceived practicality” is not really correlated to some language being “better” than another. It is almost always a function of the language’s ecosystem, though. That is, the number of available libraries for a given task, how well these are maintained and how well they are documented.

I recently attended a talk by Shriram Krishnamurthi, who said that it’s all about education. In his view, a successful ecosystem must be good at education as well.

Hence I would try to take Haskell-the-language out of the equation. There is no way Lang X can succeed if its proponents are bad at teaching others about it, or think that time is better spent explaining why “Lang X is better than Lang Y” and “you really just should try it to see”, because ultimately it is all about good teaching materials for Lang X and its best-in-class libraries.

8 Likes

Hmmmmm. It’s not as if anybody’s stopping people trying out Haskell. Maybe ~20 years ago Haskell was in obscurity, but these days I don’t think there are any curious programmers who haven’t heard of Haskell.

A lot of programmers aren’t going to benefit from Haskell: if you’re churning out database-to-screen-to-keyboard-to-database applications in a rigid employer-dictated framework, on a 15-year-old application, what can Haskell teach you? Or the benefit will be ‘recreational programming’/hobby projects – in which case see paragraph 1.

I don’t see the point in talking about “contributions”. The continuing problem is there’s tiny capacity to make any changes to GHC or its tooling. Piling more people into the demand side will just make for more frustration. I note the people who are competent to make actual contributions (not me) are too busy to hang out on Discourse.

Some of those people would massively benefit from Haskell. I know because I was one of them. And it’s not about what Haskell will teach (perhaps this is one of the mental blockers: “the benefit of Haskell is that it helps you reach enlightenment”) it’s that Haskell is a massively more pleasant language for churning out database-to-screen-to-keyboard-to-database applications in a rigid employer-dictated framework, on a 15-year-old application.

I’m not sure what this means. It seems obvious to me that more competent people in the ecosystem will lead to more progress on the things we all benefit from.

Could you elaborate? @sgraf is posting above you, for example. @mpilgrem maintains stack and participated in this thread. Many people who work on GHC, Cabal, stack, GHCup and HLS are here regularly. So I must be misinterpreting something in your comment.

3 Likes

Hugely disagree with this point and this logic.

First, why are we deciding for some unknown persons what will or will not be good for them? We should treat people we don’t know as adults who can make decisions for themselves. If we are rational, we just can’t make decisions like “there are people who would not benefit, so we should not do anything”. The best we can do, and should do, is to make more opportunities for all; those who need them will use them.

And yes, the more people in Haskell, the more benefit for all Haskellers.

4 Likes

So what is something that has “mass appeal”, something that can be read from the side of a box at a computer shop?

…because these days, who doesn’t want to “get more stuff done” simultaneously? Moreover:

Exactly.

The choice of non-strict semantics by default, and consequently purity, now places Haskell at an advantage over most other “step-by-step-by-step” languages, where historically parallelism and concurrency have been thoroughly muddled-up. But as the existence of ParaSail shows, nothing stops new parallel languages from appearing. So for Haskell to be synonymous with parallelism means taking the necessary measures now, while (most of) the rest of the competition is still deciding how to “add on” purity e.g:

(…let alone non-strict semantics ;-)
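To make that advantage concrete, here’s a minimal sketch (assuming the `parallel` package and a `-threaded` build; the `fib` example is mine, purely for illustration). Because the function is pure, swapping `map` for `parMap` cannot change the result, only the wall-clock time:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- An expensive pure function; purity guarantees that evaluating
-- the calls in parallel cannot change the answer.
fib :: Integer -> Integer
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

-- Sequential version:
results :: [Integer]
results = map fib [30 .. 34]

-- Parallel version: a one-combinator change, no locks, no races.
resultsPar :: [Integer]
resultsPar = parMap rdeepseq fib [30 .. 34]
```

No “step-by-step” language can offer that swap so cheaply, because there the evaluation order is part of the program’s meaning.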

EDIT: Sorry, the following was meant to be a reply to this post by @graninas; I must have clicked on the wrong Reply.

I’ve tried to read or, at least, get an impression of, the almost 540 comments on the RecordDotSyntax GHC language extension proposal between its being made on 11 October 2019 and the announcement of the GHC Steering Committee’s conclusion on 3 April 2020. I also read the public emails of the GHC Steering Committee on the proposal between 9 December 2019 and 3 May 2020.

Based on my own impression of that process, I would say that it is not a good example of insufficient pragmatism as a perceived Haskell community value (recognising that the community is diverse; I’m not saying everybody involved in that process always took a ‘pragmatic’ approach; and also recognising that ‘principles’ are important too).

However, near the conclusion of that process, there was a passing comment by Simon Peyton Jones that, for me, did chime with your thesis. He wrote: “… We have waited a long time already – I have been engaged in debate about this topic for over two decades – and I think it’s time to decide something. …”. By ‘two decades’, I understand (EDIT: from this email, preparing for a vote) him to refer to a paper that he had written with Mark P. Jones in 1999 entitled Lightweight Extensible Records for Haskell.

That said, a significant part of those two decades would have fallen before ‘Haskell escaped from the Ivory Tower’.

No I wasn’t “deciding for” anybody. I’m making a prediction for what decisions they’ll make “for themselves” after they’ve played with Haskell – based on having worked amongst programmers and commercial applications for decades. (Commercial applications that typically take the user row-by-row through the database, so will show no benefit from parallelism pace @atravers’ claim for a “mass appeal”. [**])

Yes, that’s what I said Haskell is already doing paragraph 1. You can lead a horse to water, but you can’t force it to drink.

I think this discussion has got to the point of repeating itself.

[**] Just what proportion of the industry is turning out PC games? And don’t they and the players have something useful to do with their lives?

Phew! You’ve gone through all that discussion? Epic!

Yeah thanks, but no: earlier than that. Haskell 98 records were very much seen at the time as a stopgap, because they had to put something into the standard. Trex was already in Hugs by 1996 [3 below]/[5 in the 1999 paper], and Trex continued to be developed in Hugs until ~2004. The 1999 paper essentially proposed to abandon H98 records and adopt Trex. That would have been a majorly breaking change - but in an era when Haskell users were a tiny ‘ivory tower’, and much more tolerant of breakages.

Errm, coming in a thread titled ‘Failed Successfully’, I’m afraid I’ve lost track of the double/triple negatives going on here. Who was pragmatic? Who was purist? Who could have been more pragmatic?

@graninas’ claim was that one side of the debate wanted to implement . to be like OOP; the other side wanted to not implement . because it would be dumbly aping OOP – and only for that reason, like Haskell must keep itself aloof from other languages. My memory (without going through the whole thread) is nobody particularly was suffering from envy or jealousy of OOP. Rather it was: Haskell has got itself in a mess with . [**], can we find a compromise syntax/lexing that allows all existing usages to co-exist (backwards compatibility) and also this new syntactically-specific usage?

@graninas suggested they could have proposed a different operator for field access - except what? Any symbol from user-space might already be taken, so again breaking backwards compatibility.

[3] B. R. Gaster and M. P. Jones. A polymorphic type system for extensible records and variants. Technical Report NOTTCS-TR-96-3, Computer Science, University of Nottingham, November 1996.

[**] because . is a terrible symbol to use for an operator as common in lambda-calculus as compose °; and because . is already used for all sorts of purposes, including as decimal separator and for module namespacing.
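To make the overloading concrete, here’s a minimal sketch (assuming GHC 9.2+ with `OverloadedRecordDot`, the extension the proposal eventually became) showing three readings of the dot side by side, distinguished only by lexical context:

```haskell
{-# LANGUAGE OverloadedRecordDot #-}

import qualified Data.Char as Char   -- `.` as module namespacing: Char.toUpper

data Person = Person { name :: String }

-- `.` as function composition (the lambda-calculus compose):
shout :: String -> String
shout = (++ "!") . map Char.toUpper

-- `.` as field access: no spaces around the dot, so `p.name`
-- lexes differently from the composition `p . name`.
greet :: Person -> String
greet p = "Hello, " ++ shout p.name
```

Whitespace-sensitive lexing is what lets all the existing usages (plus decimal literals like `3.14`) co-exist with the new one - which is exactly the backwards-compatibility compromise the thread fought over.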

…because modifying a shared resource does require concurrency, which would help to explain why ParaSail has no global variables. Interestingly, this appeared in one of my searches yesterday:

Lazy Evaluation of Transactions in Database Systems (2014)

…so given the appropriate circumstances, maybe “read-only” transactions can occur in parallel.

But more generally…unless someone discovers a way to use e.g. tungsten carbide as a semiconductor, the future is multicore/threaded, and “straight-track” sequential programs will have to be adapted accordingly.


I’ve mentioned it before: Unicode didn’t exist back in 1987…

Of course I get that. . doesn’t even look much like °. But that’s the degree sign (as I used above), which isn’t quite right: compose should sit at mid-height (pasted from wikip; I’ve no idea how to get it on my keyboard). Then @ at least has a circley thing at mid-height, and is a kinda reserved symbol in Haskell, but can’t currently appear in expressions.

. is used in math to mean multiply or dot-product (at mid-height, which we also don’t have in ASCII), or to denote some arbitrary binary operation. It’s already too overloaded and too precious to use as a vanilla operator.

Yeah. The database is the global variable – and not just global to your program/all its threads, but global to every other user/program on the network. And updates to it need to be under commitment control, to avoid any other session seeing it in a half-updated state.

Sure, that makes sense where most activity is enquiries, with optimistic locking for updates. We then need all sorts of double-check and rollback strategies in case the database has changed after my user looked at it but before they entered their update. Did somebody above say “There is nothing ‘simple’ about reassigning state.”?
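Within a single program, at least, Haskell already packages that optimistic, rollback-on-conflict discipline as STM. A minimal sketch (assuming the `stm` package; the stock example is mine): a transaction that loses a race is rolled back and retried automatically, so no other thread ever sees a half-updated state.

```haskell
import Control.Concurrent.STM

-- A shared stock level, visible to all threads in the program.
-- If another thread commits a conflicting change first, this
-- transaction is retried from scratch - the "double-check and
-- rollback strategy" comes for free.
sell :: TVar Int -> Int -> STM Bool
sell stock qty = do
  onHand <- readTVar stock
  if onHand >= qty
    then writeTVar stock (onHand - qty) >> pure True
    else pure False

main :: IO ()
main = do
  stock <- newTVarIO 10
  ok <- atomically (sell stock 3)
  remaining <- readTVarIO stock
  print (ok, remaining)  -- (True,7)
```

Of course that only covers state shared within one process; a network-wide database still needs its commitment control at the database, not in the client language.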

…yes, I do vaguely recall something being written to that effect - now what did I write again:

So a ParaSail library which does access a shared resource such as a network-wide database cannot use global references/variables for that purpose - it would need to work differently. But however it did work, concurrency would still be required because:

There is nothing “simple” about reassigning state.

…when it’s shared so vastly as a resource on a network, or just shared within a program.

Interesting idea. Let me tell you how it works in the real world: the fastest-moving product at the busiest store is also the data point that gets the most enquiries - so under this lazy-update strategy it will suffer the worst read-latency.

A more realistic strategy is to remove as many of those data integrity ‘promises’ as possible. Typically, to allow the stock-on-hand balance to go negative (even though that makes no sense physically), in the expectation that delayed transactions will make up for it. Of course this then needs a human follow-up procedure to enquire on negative on-hands and go physically examine what’s on the shelves.

Amazon, for example, will happily sell you some dead trees and take your money with no idea whether it can ship to you - either within their promised delivery window or ever. They’ll then take their time using your money before giving it back because the product is undeliverable.

In this scenario, enquiries on the fastest-moving product at the busiest store are next to useless. The system might as well make up a number as force all that read-latency. (What I’d record is the date/time of latest reasonably confident stock level; plus a metric for how fast-moving; then guess a number by applying the decay metric over the intervening time interval.)
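That guess could be sketched in a few lines (the names and the linear-depletion model are mine, purely for illustration; a real decay metric could be fancier):

```haskell
-- Extrapolate a stock level from the last confident count
-- and a "how fast-moving" metric, instead of forcing a read
-- of the contended live balance.
data StockSnapshot = StockSnapshot
  { lastKnown   :: Double  -- units on hand at the last confident count
  , ratePerHour :: Double  -- average units sold per hour since then
  }

estimateOnHand :: StockSnapshot -> Double -> Double
estimateOnHand s hoursSince =
  max 0 (lastKnown s - ratePerHour s * hoursSince)

-- e.g. estimateOnHand (StockSnapshot 120 4.5) 6  ==  93.0
```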

So a lazy database update might work, but you’ll have to redesign the whole user-facing business logic and manage user expectations. The programs, sequential or otherwise, are not really the place to tackle it. I don’t see Haskell vs (say) COBOL really having any bearing - explaining which is why I’m going so deep into the weeds in this thread. And I wouldn’t bet my stock control on an application infrastructure for which there’s only a handful of (rather too purist) programmers in the country. That’s why I say

Perhaps I mean: a lot of employers of programmers aren’t going to benefit from their employees grokking Haskell. Especially not if it makes their employees as argumentative as us lot round here (myself included :wink:).

1 Like