Single-compiler vs. multi-compiler (or: implementation-defined vs. standardized) languages

I believe (and that’s my personal perspective) that Haskell has given much to the PL research community and has reached the bottom of the feature fountain. Nowadays, exciting new languages / research platforms seem to be more oriented towards a mix of proof assistant and general-purpose language (Idris, Lean, Coq, Agda). The 2020s might be the decade where we cement our legacy and bring it to the wider public, and leave the cutting-edge research to other languages that can innovate in more radical ways without hurting their user base too much.

Edit: Since agda2hs appeared in the landscape, I have been toying with the idea of having a mix of Haskell and Agda code in my systems, but ultimately running Haskell programs in which some parts would have been generated from Agda. But I don’t believe we should invest any resources to make Haskell a dependently-typed language for the masses at this point.

2 Likes

I don’t believe we should invest any resources to make Haskell a dependently-typed language for the masses at this point.

I’m not sure who you include in this use of “we”, but the company I currently work at—Serokell—is investing resources in exactly that. To be more precise, the form in which they are investing is paying my salary, as implementing dependent types in Haskell is basically my job. I would like to ask you not to try to change their minds about this.

Besides, the GHC Steering Committee has accepted the Design for Dependent Types proposal. There’s huge demand for dependent types in Haskell (there’s also some resistance, but it doesn’t cancel out the demand).

6 Likes

I don’t think I was aware that you were working on this topic. :slight_smile: Is there a focus on the ergonomics of the system, or just an implementation? Is there a roadmap that can be read somewhere, or is the design proposal your roadmap?

1 Like

Ergonomics are the entire point of the initiative.

We already have a way to model dependent types in Haskell using a combination of defunctionalized type families, data type promotion, GADTs, kind polymorphism, singleton types, etc. But this encoding incurs a heavy cost—both in terms of performance and in terms of ergonomics. My goal is to lower that cost as much as possible by adding direct support for dependent types to the language.
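To make that encoding cost concrete, here is a minimal sketch (my own illustration, not GHC’s planned design, and all names are mine): to use a type-level number at runtime today, we must mirror it with a dedicated singleton type.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- A promoted Nat, used at the type level via DataKinds.
data Nat = Z | S Nat

-- A length-indexed vector: the length lives in the type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The singleton: a runtime witness that mirrors a type-level Nat.
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- replicate needs the length both at runtime and in the type,
-- hence the SNat argument instead of a plain Nat.
vreplicate :: SNat n -> a -> Vec n a
vreplicate SZ     _ = VNil
vreplicate (SS n) x = VCons x (vreplicate n x)

-- Forget the index to observe the result.
vtoList :: Vec n a -> [a]
vtoList VNil         = []
vtoList (VCons x xs) = x : vtoList xs
```

With first-class dependent types, `vreplicate` could take the number itself as a single argument, with no `SNat` mirror to define and maintain.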

There’s no roadmap per se: instead of a specific sequence of steps, we have a vision for the final destination and a bunch of ideas for how to get there. Each step goes through the GHC Steering Committee to ensure its quality and that there is demand for it in the community. The nearest milestone is implementing the accepted GHC Proposal #281, “Visible forall in types of terms”. Then we will see where to go from there.
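For readers who haven’t seen proposal #281: it makes a `forall` visible, so the type is passed like an ordinary argument. A rough before/after sketch (the runnable part uses today’s invisible `forall`; the visible-forall version appears only in a comment, and `nameOf` / `demo` are illustrative names of mine, not from the proposal):

```haskell
{-# LANGUAGE ScopedTypeVariables, TypeApplications, AllowAmbiguousTypes #-}
import Data.Proxy    (Proxy (..))
import Data.Typeable (Typeable, typeRep)

-- Today: the forall is invisible, so the type must be smuggled in via @.
nameOf :: forall a. Typeable a => String
nameOf = show (typeRep (Proxy @a))

-- Under proposal #281 (RequiredTypeArguments, GHC >= 9.10) this could
-- instead be written with a *visible* forall:
--
--   nameOf :: forall a -> Typeable a => String
--   nameOf a = show (typeRep (Proxy @a))
--
-- and called as plain application:  nameOf Bool

demo :: String
demo = nameOf @Bool   -- today the type is supplied with @-syntax
```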

1 Like

Fantastic then, I would be delighted to have my opinion changed by your work :slight_smile:

When it comes to these sorts of things, I feel people only see the language part of the problem.

Ergonomics are not simply technical properties of a language. As the ecosystem adopts strategies and tools, ergonomics become an emergent property of how people understand (or don’t understand) a certain feature.

The design space of DT is so huge that I believe there’s no non-research language that has proven that it works well across an entire ecosystem.

This is another of those points where having two different compilers may be reasonable, and where, as a result, DT could be kept out of the language spec and made opt-in. But it doesn’t seem we are going that route.

Haskell was originally a consolidation of efforts from multiple statically typed lazy functional languages. There were once implementations such as Hugs, NHC from York, and UHC from Utrecht in addition to GHC. The Haskell language report defined a programming language that was reasonably portable between these systems, though each had their own set of interesting extensions. Our roots as a standardized, multi-implementation language still guide a lot of the structure of the community.

What changed? Part of it is that GHC got a lot better at the aspects that the other implementations had previously had going for them (e.g. GHCi started existing in version 5.00, while folks used to use Hugs for the REPL and GHC for batch mode compilation). Part of it is that other implementations fulfilled their research programs and the authors moved on. Part of it is that the actual Haskell found in the world became more complicated, relying on more language features, and expectations for a language implementation increased over time, making it more and more work to keep up.

2 Likes

The design space of DT is so huge that I believe there’s no non-research language that has proven that it works well across an entire ecosystem.

Which non-research languages have attempted DT and proven that it doesn’t work? The closest I can think of is Idris - where DT has proven to be a massive success.

I do think this is a primary concern with DT - but that is why it’s so critical to comment on ghc proposals that attempt to add new DT-like features to Haskell.

For example: Type Functions is an attempt to introduce staged closed evaluation of terms to Haskell’s type system. The feedback is quite positive - the critics of the proposal have even been swayed! There seems to be no resistance…

People often complain about “DT”, but many don’t seem to complain about specific DT features such as visible dependent quantification and type functions :slight_smile:
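For context, the nearest thing in today’s GHC to a “type function” is a closed type family, which computes only at the type level. A small sketch (`Doubled` and `doubledVal` are my own illustrative names, not from the proposal):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, ScopedTypeVariables #-}
import Data.Proxy  (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal, type (+))

-- A closed type family: closed-world matching, evaluated by the
-- type checker rather than at runtime.
type family Doubled (n :: Nat) :: Nat where
  Doubled n = n + n

-- Reflect the type-level result back to a term so we can observe it.
doubledVal :: forall n. KnownNat (Doubled n) => Proxy n -> Integer
doubledVal _ = natVal (Proxy :: Proxy (Doubled n))
```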

1 Like

Idris is a research language in my book.

I’m massively interested in DT (I find F* extremely exciting and it has actual use cases), but I’m not interested in using DT in Haskell. So I feel there’s nothing to comment. I simply want it opt-in (or not at all).

Since we don’t have competing compilers, we also can’t really have a balanced argument about these things, because GHC is the champion and whatever they decide becomes ad-hoc Haskell.

2 Likes

I know very little about dependent types, but the Motivation section with the sizeOf Bool (vs sizeOf @Bool) example made it very clear why that proposal would make things more ergonomic – I’ll bet I’m not the only one who found @Bool and Proxy confusing and unaesthetic the first time around, or got confused by the error messages around those things. I’ve bumped into them enough without knowing they had anything to do with dependent types. A solution to this sounds like it will be worth a lot.
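To illustrate the ergonomics gap that Motivation section is about: today, a function that cares only about a type has to take either a `Proxy` argument or an invisible `@`-argument. `sizeOfP` and `sizeOfT` below are hypothetical wrappers of my own, not the actual `sizeOf` from `Foreign.Storable`:

```haskell
{-# LANGUAGE ScopedTypeVariables, TypeApplications, AllowAmbiguousTypes #-}
import Data.Proxy       (Proxy (..))
import Foreign.Storable (Storable, sizeOf)

-- Proxy style: the extra argument exists only to carry the type.
sizeOfP :: forall a. Storable a => Proxy a -> Int
sizeOfP _ = sizeOf (undefined :: a)

-- TypeApplications style: the type is passed invisibly with @.
sizeOfT :: forall a. Storable a => Int
sizeOfT = sizeOf (undefined :: a)

-- usage:  sizeOfP (Proxy @Bool)   vs.   sizeOfT @Bool
```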

(I wonder if people have been made more resistant to DT due to these kinds of experiences in Haskell.)

It isn’t just DT:

New type of ($) operator in GHC 8.0 is problematic

…Haskell: now the world’s finest imperative intermediate language?

Some time ago, there was an expression of interest in “embedding Elm in Haskell”. My final remark on that topic was to suggest adding laziness to Elm instead - to me, it seemed a simpler task.

Since the likes of Agda, Idris et al already support [the complexity of] DT, that same suggestion would also seem to apply - it would be easier to implement laziness in Agda or Idris, instead of retrofitting DT into Haskell, as the extra complexity of supporting nonstrict semantics would largely be confined to the implementation.

But perhaps the “dependenting of Haskell” may be a “rip-roaring success” that inspires the embedding of Elm into Haskell: anyone for elm-notation…?

Have you tried a full DT language and observed its errors when you got your code wrong?

Those showcase examples of how DT improves ergonomics are maybe 5% of the story. You’ll generally spend a LOT more time staring at the screen when interfacing with a DT API that you didn’t write yourself than with current APIs.

And in my opinion, the most interesting use cases, where I would take the cost of complexity, are uninteresting in Haskell anyway (such as cryptography).

The claim that DT reduces the complexity of the surface language is generally false. It will make the language orders of magnitude more expressive, and expressivity doesn’t come for free. And although it may make some specific use cases easier to express, we will never be able to remove type families and friends from the language. So there’s no cleanup effect for the surface language. Instead, you just get more choices and more expressivity.

3 Likes

I don’t see why not with a proper deprecation cycle spanning many GHC releases.

But anyways the extensions will fade into obscurity like NPlusKPatterns and DataTypeContexts. Nobody really complains about those extensions cluttering the language any more.

1 Like

That would be a disaster. Are you going to tell companies maintaining millions of lines of proprietary code to “just” migrate to DT?

This is exactly why I hope we will get a conservative GHC fork.

…or maybe just a “partial fork”: keep a pre-9.0 front end around, which would continue to be compatible with the middle and back ends of version 9.0 and beyond (assuming the Core representation changes less than Glasgow Haskell).

However:

…something which apparently is still happening:

This “leave it to GHC” thinking is a problem - if everyone keeps assuming someone else will always volunteer to keep Glasgow Haskell Central up and running, what is the incentive to actually stay around? There’s always more interesting things (e.g. research) to move on with.

(…the HF may soon need to employ many more @chreekat’s! Will there be enough left over from $500k/year to pay for all those extra professionals? But who knows - maybe the advent of dependent [types in] Haskell will result in the programming masses taking fresh interest and joining in, perhaps even starting a new Haskell implementation…hey, it could happen :-)

I’m not going to try to argue against that claim: I haven’t bothered to ‘resist’ because I’ve more or less abandoned GHC post-8.10. And the chief reason for that is because I see no benefit for me in DT: it’s just a burden of complexity.

The design for Visible Dependent Quantification seems all wrong — even if I could see a use for the feature, the syntax is un-Haskellish. I said so long and loud. I was ignored. I’ve given up.

Type Functions come with some nasty hidden gremlins. I find I can do just fine with FunDeps.

1 Like

You’re mixing up several things there; also those docos are fibs: there’s no proposal to change sizeOf from what’s in Foreign.Storable today.

sizeOf never did use Proxy. What we need IMO is a way to say that in sizeOf False or sizeOf (undefined :: Bool) or sizeOf (type Bool), sizeOf is interested only in the type of its argument. The trouble with sizeOf @Bool — which anyway isn’t going to be supported — is that the @… is optional: there’s no way to insist a type argument be provided.

If we want a type argument to be always provided, I see no reason for insisting it be the first arg following the function; whereas @… or a visibly dependent type arg must come first. So I find these first exemplars of DT to be just unergonomic, poorly designed, and un-Haskelly.

Again I’m saying this not to try to reverse those proposals, but to counter the attitude that DT has ‘won’ or has persuaded everybody. What’s worse, DT seems to be the only part of GHC that’s getting any attention (as opposed to, say, providing a records system even as not-totally-embarrassing as purescript‘s). So I see no downside in stopping at GHC 8.10.

1 Like

I see no reason for insisting it be the first arg following the function; whereas @… or a visibly dependent type arg must come first

This is simply not true; consider, for instance:

{-# LANGUAGE RankNTypes, TypeApplications #-}

f :: Int -> Int -> forall a. Num a => a
f x y = fromIntegral $ x + y

summ = f 2 3 @Double

Yes, this uses RankNTypes. But the point is, the hard requirement is only that type arguments precede the value arguments that reference them, and to me that seems very reasonable.

3 Likes

(It was more VDQ I had in mind, but @int-index said much the same in discussing that proposal.)

So those who like that sort of thing will find that is the sort of thing they like.

I’ll explain my “un-Haskellish”:

  • Haskell used to work like lambda-calculus: I can partially apply functions; I can flip arguments; I can define a version of a function with arguments rearranged;
  • Type inference used to mean that if a tyvar appeared in multiple places in a signature, unification from terms (including those with a type annotation) would resolve to a single type, without me having to worry about order of appearance.
  • Do I have a use-case where I’d prefer the type pseudo-argument after the term? Yes: consider a function with an argument that’s a collection. Usually the collection is non-empty, so the compiler can get the element type from there — then the pseudo-argument can be a dummy. But just in case it’s empty, I want to be able to supply the element type explicitly — in a way that looks more like a type annotation. Supplying @… (even suffixed, as you show) remains optional, not a required argument.
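A minimal sketch of that collection scenario (`headOrDefault` is my own illustrative name): the element type is usually inferred from a non-empty collection, but for an empty one the caller must supply it, and @ remains merely optional rather than required.

```haskell
{-# LANGUAGE TypeApplications #-}

-- Usually the element type comes from the elements themselves;
-- only the empty case needs it supplied by the caller.
headOrDefault :: Bounded a => [a] -> a
headOrDefault (x:_) = x
headOrDefault []    = minBound

-- inferred from the elements:  headOrDefault [3 :: Int, 4]
-- empty list, via @:           headOrDefault @Int []
-- empty list, via annotation:  headOrDefault ([] :: [Int])
```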

I’ll be told that if I don’t want VDQ and friends, I can leave the extension switched off, and my Haskell intuitions still stand. That’s what I’m doing: leave all v9 extensions switched off by sticking at 8.10.