Clarifying dependent types

Lots of these questions can be clarified by reading the relevant GHC proposals, both accepted and in-discussion, particularly Design for Dependent Types but also others like Modern Scoped Type Variables.

…or maybe just cautious:

  • from what I’ve seen of it, the official arrival of dependent types in Haskell would be “a turning point” for type systems, much like the inclusion of type classes all the way back in 1987. As every experienced Haskeller knows, there’s at least one problem with those which still seems to elude a practical solution: the somewhat pejoratively-named “orphan instance” (and there are probably others). Who knows what new and equally obstinate problems will appear if Haskell “goes dependent”?

  • At least one recent language arose from frustration with the long build times of one older language in particular (IIRC, not Haskell; it’s more commonly categorised as OO) as a result of its type system. There have already been some “memorable” complaints about this in Haskell builds; it would be unfortunate if dependent types aggravated matters further, to the point of people seriously contemplating an all-new non-strict functional language without dependent types (and perhaps without type classes!)

  • For better or worse, Simon Peyton Jones has ended up being “Mr Haskell” (the language, not the person) - if your name were equally synonymous with an endeavour so widely known, would you not be more than a little cautious about a similarly consequential change?

But these are just my opinions: let’s see what @simonpj says…

2 Likes

I would describe myself as cautiously supportive of adding dependent types to Haskell. Let me say in a bit more detail what I mean by that.

  • A dependently typed language is not one well-defined thing. There is a huge design space, with many interesting tradeoffs. That is one reason for caution: in a language like Haskell that is used widely in production, it makes sense to take careful, incremental steps rather than a big-bang jump to a destination that we may live to regret.

  • I am a passionate advocate of static type systems; as I often say, static types are by far the world’s most widely used and effective formal method. Haskell is a laboratory for exploring the bleeding edge of what you can do with static types, and I am proud of that.

    But a key feature of static types is that they are comprehensible to programmers. I think that is true today – but only just. A reason for caution is that we must not make Haskell so weirdly sophisticated that no one understands it any more.

    That said, there is one way in which adding dependent-type features arguably makes the language simpler, not more complicated, because terms and types become more alike. To take a different example, adding kind polymorphism was a complication in one way, but it made Haskell simpler in another: we can write polymorphic functions at the term level, so why can’t we do so at the type level?

    But the details of full-spectrum dependency are gnarly, and so again I am attracted by making incremental progress: adding features that may be inspired by the North Star of dependent types, but which each make sense individually, and improve the lives of programmers.

  • Speaking of a North Star, I was a co-author of the A design for dependent types proposal, which has been accepted by the GHC steering committee, a decision that I supported. It lays out a design sketch for adding more dependency to Haskell. To quote: “This current proposal aims to give some detail to what dependent types may look like. Acceptance does not commit us to an unalterable course of inclusion of exactly the features described here. But it would suggest that other proposals in this space should be designed in such a way that they are compatible with the details presented here.”

    So the specifics will be the subject of future, more detailed proposals, but we do have a public, agreed direction of travel.

  • As the proposal says explicitly in Section 4.11, I am not yet signed up to the Glorious Vision of collapsing types and terms into a single level. I’m not definitely against it either, but I am not yet persuaded that the benefits would justify the costs. In particular, the costs to GHC’s implementation, which treats types and terms quite differently, would be very considerable. (This cost/benefit tradeoff might look very different in a language whose design, and compiler, embraced full spectrum dependency from Day 1.)

    Perhaps I will be persuaded in the future. For example, it took some time for Stephanie and Richard to persuade me that TypeInType was a good idea, but persuade me they did. For now, it’s an area of caution for me.

    Meanwhile, I think we will get huge value from debating, adopting, and implementing the earlier design changes sketched in A design for dependent types. Indeed we are well advanced along that path already.

  • I’m a big supporter of Liquid Haskell, and I’d love to see enough effort invested in it so that it can come “in the box” with GHC. But that’s a big job, and more people will need to get involved. Do join the effort!
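To make the earlier kind-polymorphism point concrete, here is a minimal sketch (the names `konst` and `Konst` are purely illustrative): the same polymorphic constant function, written once at the term level and once at the type level.

```haskell
{-# LANGUAGE PolyKinds, DataKinds, TypeFamilies, StandaloneKindSignatures #-}

-- Term level: one polymorphic function covers every type.
konst :: a -> b -> a
konst x _ = x

-- Type level: with PolyKinds, the analogous definition is parametric
-- in the kinds k1 and k2, so it applies uniformly to types, type-level
-- naturals, lifted booleans, and so on.
type Konst :: k1 -> k2 -> k1
type family Konst a b where
  Konst a b = a

type T = Konst Int Bool   -- the type checker reduces this to Int

main :: IO ()
main = print (konst (42 :: T) False)   -- prints 42
```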

Simon

8 Likes

The caution you describe is fitting, particularly because there are so many unknowns, and the cost of reversing an implemented decision is not to be taken lightly. Similarly, given that much of the challenge of figuring this out requires significant experimentation and exploration, I think the thing most often missing from the discussion around DH is how best to “cushion” (or isolate) the experimental, exploratory future world of DH from day-to-day, non-experimental, production use of Haskell (and the ecosystem of packages, documentation, etc. surrounding it).

And to get this out of the way up front: while our language pragmas are great, wonderful, etc., I think there’s an argument to be made that they are not a sufficient cushion or mechanism for isolating a multi-year “we’re trying to figure out what the best implementation looks like” from “we know this works well”. It’s been mentioned elsewhere by other people, but the solution is probably in the realm of some sort of LTS-style social/technical agreement and implementation. Personally, I think that includes a standard lib/prelude and documentation.

In other words, we need to define what pragmatic, correct, reliable Haskell looks like, and we need to allow space for “figuring out what we don’t know about DH”, isolated in such a way that you can get in and get dirty if you want to, but aren’t pulled into the muck if you’re looking to stick with what’s agreed as “fit for production”. When we’re ready to call DH “production-ready”, we flip the switch and bump the LTS along with the docs, standard lib, packages, etc. It is a challenge to figure out what that looks and feels like, but given what’s at stake, it’s not a challenge to shy away from.

The surface syntax proposals for DH are largely finished, so per Wadler’s law I am hoping progress can pick up speed again :).

The most important parts of the DH work should be the most uncontroversial: current kludges like type families have terrible semantics that make them extremely frustrating to use. Whether or not they become just functions, it would be prudent to:

  1. Reimplement them with better semantics
  2. Share underlying notions / implementation of partial evaluation between optimizer and type checker.
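To make point 1 concrete, here is a minimal sketch (illustrative names only): the same doubling function written at the term level as an ordinary function, and at the type level as a closed type family that the constraint solver must evaluate with its own ad hoc reduction machinery.

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, TypeApplications #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits

-- Term level: a plain, first-class function.
double :: Integer -> Integer
double n = n + n

-- Type level: the "same" function must be re-expressed as a type
-- family, evaluated by the type checker rather than the runtime.
type family Double (n :: Nat) :: Nat where
  Double n = n + n

main :: IO ()
main = do
  print (double 3)                    -- evaluated at run time: 6
  print (natVal (Proxy @(Double 3))) -- evaluated during type checking: 6
```

The duplication itself, and the fact that the two evaluators share no implementation, is exactly the point being made.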

Greenspun’s tenth rule is:

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

I would argue we are in a very analogous situation where

Any implementation of a sufficiently complicated piecemeal generalization of Haskell contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Dependent Haskell.

So the really important thing is to fix that: GHC Haskell is a nice enough language, but GHC the implementation is not in a healthy state as far as its software engineering is concerned! A strong effort to shore up the internals of GHC would be an excellent approach to avoiding bike-shed controversy, and would get us closer to Dependent Haskell purely as a side effect!

5 Likes

I’m glad to see this is recognized.

My doubts about DH aren’t so much about the features itself or the implementation cost. They are rather practical: what will people do with it?

I’m a big fan of F*, which does have dependent types… but is also a rather specialized language for specific use cases. Additionally, it allows extracting F# code. For a language that has such a specific use case and transpiles to something else, there’s less need for caution.

For Haskell, I’m specifically wondering about:

  • if dependent types will eventually be used in base, what effect will this have on newcomers?
  • will I be exposed to type-spaghetti in every second library? What will this do to APIs?

I think these questions are really hard to answer and I understand they are probably not much considered when designing DH, because it’s really difficult to say.

That said, I’m much more a fan of liquidhaskell, or of the idea of just writing a superset language of Haskell that transpiles back to it. Those approaches are clean and totally opt-in. Once this lands in Haskell proper, it will no longer be opt-in, even if it’s behind a language pragma.

The challenge with sequential incremental steps is that it may never lead to a satisfying design as a whole.

3 Likes

This is interesting to hear. Could you expand on that? (Perhaps the best place would be a Discussion on the ghc-proposals repo.)

1 Like

@tomjaguarpaw I wish I had a better thing for you to read squarely on this topic, but for now I might recommend this reply from @hsyl20, Back-end responsibilities are distributed throughout the compiler (#20927) · Issues · Glasgow Haskell Compiler / GHC · GitLab, on a narrower issue that nevertheless touches on larger ones. It isn’t terribly near the Dependent Types side of things, but it discusses more flagrant, clear-cut violations of what most Haskellers (and even non-Haskellers) consider good software engineering.

For me, the grand narrative here is basically what one gets from synthesizing the patches and writings of @hsyl20, and @int-index (in addition to @rae).

  • We can work from the bottom looking at the code, ignoring what it is supposed to / trying to do (i.e. the language itself) and work our way up.

  • We can also work our way from the top down, asking why “Milner’s coincidences” and dustier corners of the language aren’t broken up so we have fewer, more orthogonal features.

I really believe at best this is all the same fight and the same goals: expressive power is never the goal, but the icing on the cake one gets as a result of pushing for modularity and compositionality.

The right thing to do is bring more attention to whichever approach ought to be uncontroversial under enough scrutiny. DH is, as I said, hopefully about to get less controversial as we escape the clutches of Wadler’s law. But the awkwardness of the internals of GHC as described by @hsyl20 should be even less controversial, if only people knew. The challenge is a survivorship bias where:

  • Most Haskellers of course just use GHC as a black box.

  • Perhaps a large “silent” group of Haskellers has tried messing around with the GHC API a few times, but was frustrated by its poor shape. Perhaps they think “surely it doesn’t need to be this bad”, or perhaps they merely internalize “compilers are hard” and blame their own skills rather than the hostile interface.

  • Those who stick around working on GHC tend to spend a very large portion of their time working on GHC and not on other projects. Whether they never minded these issues to begin with, or gradually became accustomed to them (and unaccustomed to how things are done “elsewhere”), is beside the point.

6 Likes

Lastly, I do believe that the implementation details of GHC subtly but profoundly shape the “Overton Window” of library, language, and other “ecosystem” design decisions. I suppose this belief is implicit in all my comments, but I think it is good to spell it out explicitly.

For this reason, I think folks who are perfectly happy just using GHC as a black-box executable (and therefore don’t directly care in the slightest about how it is implemented) should still care about the state of things “under the hood”. That state will come back to affect the issues you do in fact directly care about, by influencing their Overton Window.

I think it would be a great service to the community if you could write all these details up into an article. It’s great that you have perceived them, but of much greater benefit to the community if you can share them!

2 Likes

Yeah I hope that will happen too! Hopefully I can merely co-author :).

I honestly do not understand the type spaghetti argument.
Dependent Haskell would likely reduce existing type spaghetti in libraries, e.g. by allowing partial application of functions at the type level. I don’t see how making types easier to manipulate would mean that abstractions which are possible now will instead be implemented in an overly complex manner.
However, I do think that we will see complex abstractions that simply weren’t possible before.
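For example (a minimal sketch; `DoubleSym`, `Apply`, and `Map` are hypothetical stand-ins for the defunctionalization machinery found in libraries like singletons): because today’s type families must be fully applied, a type-level map cannot take a type family directly and has to be fed a defunctionalization symbol instead. With dependent types an ordinary function could be passed, and this machinery deleted.

```haskell
{-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, TypeOperators #-}
import Data.Type.Equality ((:~:) (Refl))
import GHC.TypeLits

type family Double (n :: Nat) :: Nat where
  Double n = n + n

-- 'Map Double xs' is rejected: type families cannot be partially
-- applied. The workaround is defunctionalization: a dummy *symbol*
-- plus an 'Apply' family that interprets it.
data DoubleSym

type family Apply f (x :: Nat) :: Nat
type instance Apply DoubleSym n = Double n

type family Map f (xs :: [Nat]) :: [Nat] where
  Map f '[]       = '[]
  Map f (x ': xs) = Apply f x ': Map f xs

-- The type checker reduces Map DoubleSym '[1,2,3] to '[2,4,6].
check :: Map DoubleSym '[1, 2, 3] :~: '[2, 4, 6]
check = Refl

main :: IO ()
main = case check of Refl -> putStrLn "ok"
```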

2 Likes

(Code/type) spaghetti isn’t a result of a lack of expressivity. It’s the opposite.

This is as much a social experiment as it is a technical one, but I hear only discussions about the latter.

Quoting from https://artandlogic.com/2021/08/exploring-dependent-types-in-idris

As the code that you’re formally verifying becomes more complex, implementation quickly goes from easy, to time-consuming, to an active research area.

We’re creating a new language, incrementally, that will not be anything like what Haskell was, if it succeeds. Maybe that’s good, maybe it’s not. That will depend on the users of the language. Who will buy into it? How will they use it?

This has never been done before in a mainstream language, has it? Yes, I consider Haskell mainstream.

Many Haskellers can barely keep up with all the tools that we already have, see Stephen Diehl’s great compilation.

Dependent types being entirely opt-in would solve this issue. But again: a language pragma is not opt-in. I’m not saying I know what’s best, I’m just raising these concerns.

1 Like

@hasufell your arguments make sense to me in the abstract, but do not fit the facts as I see them.

We’re creating a new language, incrementally, that will not be anything like what Haskell was, if it succeeds. Maybe that’s good, maybe it’s not. That will depend on the users of the language. Who will buy into it? How will they use it?

No, that is what we’ve been doing for the past 10–20 years: the current hodge-podge of features vaguely gesticulating in a Dependent Haskell direction. The direction of the gesticulating is not the problem. The hodge-podge doing the vague gesticulating is.

This has never been done before in a mainstream language, has it? Yes, I consider Haskell mainstream.

C++11 added a lot of new stuff. The ickiness compared to something like Rust was that there was no stomach for getting rid of the bad old stuff. We thankfully don’t need to get rid of anything! Certainly not in Haskell 2010. I would advocate getting rid of type families, but this is purely optional; we can support them alongside the better alternative indefinitely.

Many Haskellers can barely keep up with all the tools that we already have, see Stephen Diehl’s great compilation.

That’s because we have the hodge-podge. Once we unify the current mess into Dependent Haskell, it will make more sense not less!

Dependent types being entirely opt-in would solve this issue. But again: a language pragma is not opt-in. I’m not saying I know what’s best, I’m just raising these concerns.

On a technical level it sure is opt-in. You mean on a social level? Sure, but we already have that problem today with the hodge-podge. If we have that problem regardless, I don’t see this as an argument not to clean things up.

2 Likes

From my limited experience with F* and liquidhaskell, I wholeheartedly disagree. It’s great to have these tools, but they also add a constant layer of additional reasoning to your everyday programming. Especially if the system is more powerful and natural to use.

No. On a technical level. Exposure to APIs will do that.

From my limited experience with F* and liquidhaskell, I wholeheartedly disagree. It’s great to have these tools, but they also add a constant layer of additional reasoning to your everyday programming. Especially if the system is more powerful and natural to use.

To be clear, I was referring to the hodge-podge of features within GHC, not liquidhaskell. Dependent Haskell doesn’t subsume liquidhaskell any more than GHC Core subsumes type classes.

No. On a technical level. Exposure to APIs will do that.

Case in point: the fact that GHC doesn’t attempt to audit whether an imported interface relies on disabled extensions is a travesty that should be fixed.

I am hoping Remove NoGhcTc, allow HsType GhcTc, HsDecl GhcTc (#18758) · Issues · Glasgow Haskell Compiler / GHC · GitLab will enable that. So long as module interfaces only remember Core types, we are mostly prevented from understanding the language features involved. Once we properly distinguish the surface language from Core types, we can also store the former in .hi files. This will greatly improve this and many other matters of ergonomics.

We are indeed coming up on 10 years of fears about DH.

The Committee has decided to proceed with DH. Can I suggest we all just shut up and stop distracting those who understand DH and are competent to implement it.

I don’t understand. (I’d describe myself as hostile to DH – or at least hostile to all the proposed syntax that’s growing like a carbuncle on the face of a dear friend).

Clearly I’m too stupid to understand. Probably I’m too stupid to ever write DH. (Although I’m happily using FunDeps to achieve what seems to be the same outcomes – and yes, we’re coming up on 20 years of using those techniques.) Perhaps I might benefit from packages written using DH. But I’m never going to find out unless it actually gets delivered.
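(For the curious, here is a minimal sketch of the sort of FunDeps technique alluded to, with illustrative names: a multi-parameter class whose functional dependency `x y -> z` makes the class behave like a type-level addition function over Peano numerals, long before type families existed.)

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- Peano numerals at the type level.
data Z = Z
data S n = S n

-- The dependency x y -> z means z is *computed* from x and y,
-- so the class acts as a type-level function.
class Add x y z | x y -> z where
  add :: x -> y -> z

instance Add Z y y where
  add Z y = y

instance Add x y z => Add (S x) y (S z) where
  add (S x) y = S (add x y)

-- Reify a numeral back to an Int so we can observe the result.
class ToInt n where toInt :: n -> Int
instance ToInt Z where toInt Z = 0
instance ToInt n => ToInt (S n) where toInt (S n) = 1 + toInt n

main :: IO ()
main = print (toInt (add (S (S Z)) (S Z)))   -- 2 + 1 = 3
```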

So just stop the hubbub.

1 Like

Are we really sure “the current hodge-podge of features” is going more-or-less in the direction of dependent and not dynamic types?

If some new language were to appear that had:

  • Erlang’s features; at least the ones which could be supported alongside non-strict semantics (including being dynamically typed);
  • Haskell’s syntax; again those which had a one-to-one correspondence with Erlang syntactic features;
  • a source-language “verifier”, analogous to what Sun devised for Java bytecode, to detect obvious errors like finding the square root of a string;

…could it be seen by some as a potential [dependent] Haskell replacement?

Edit: @AntC2’s message has already clarified that DH is the future of Haskell, so I’ll stop with my hubbub on this topic as suggested: look before posting in case someone else was “faster on the keyboard”

Good question. When I see examples of DH that seem to make types depend on values that can’t be known until run-time, that reeks to me (in my not-understanding) of dynamic types. Divide by zero, numeric fixed-width overflow, subscript out of array (when the subscript arrives at run-time). It’s just perversion to re-label those as type errors: type-assignments/type inference/type-safety are what you get statically at compile time. If a program crashes at run-time because of divide-by-zero, don’t tell me that’s a type error, because type errors don’t happen at run-time.

But again, we’re not going to find out unless we let the people get on and deliver it.

1 Like

Uhm, I don’t think typical dependently typed programs would call a divide-by-zero exception a type error. Coq does not allow such division to happen, and I believe the same is true for Agda and Idris. Yet they still have the concept of “values not known until runtime”, which is the core feature of DT (depending on unknown values as variables).
It’s just that some computation happens at compile time, raising errors if the desired constraint cannot be met. These kinds of errors never happen after compilation into machine code / anything else.
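A small illustration of this in today’s GHC (a minimal sketch; `safeDiv` is a hypothetical name): when the divisor is a type-level natural constrained to be nonzero, division by zero is ruled out by the type checker rather than reported at run time.

```haskell
{-# LANGUAGE DataKinds, TypeOperators, TypeApplications, ScopedTypeVariables #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits

-- The constraint 1 <= m makes a zero divisor a compile-time error.
safeDiv :: forall n m. (KnownNat n, KnownNat m, 1 <= m)
        => Proxy n -> Proxy m -> Integer
safeDiv _ _ = natVal (Proxy @n) `div` natVal (Proxy @m)

main :: IO ()
main = do
  print (safeDiv (Proxy @10) (Proxy @3))    -- fine: prints 3
  -- print (safeDiv (Proxy @10) (Proxy @0)) -- rejected by the type checker
```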