Need feedback on pattern signature behavior

@maxigit So the original 1990s Haskell didn’t have higher-rank types and other such things, which meant that implicit bindings were always very controlled – bindings always had to go in the same extremely predictable places, so it really wasn’t worth being explicit.

However, since then we’ve lifted those restrictions, and have most/all of System F (depending on which “System F”). Yes, you are right we have things beyond System F, but that doesn’t weaken the argument. System F and MLTT (the O.G. “dependent types”) both have explicit bindings: with System F, Big Lambda is carefully crafted to look the same way as Little Lambda; with MLTT the two lambdas are combined into one so scoping has to work the same way. There are other things we’ve added like constraints that don’t appear in either, but they don’t change the picture; they are explicitly bound too.
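To make the correspondence concrete, here is a comment-only sketch; the Haskell spelling assumes the @-binder syntax discussed later in this thread:

-- System F:    Λa. λ(x : a). x            -- Big Lambda binds the type, little lambda the term
-- Haskell:     \ @a (x :: a) -> x         -- @ plays the role of Big Lambda
-- MLTT-style:  λ(a : Type). λ(x : a). x   -- one lambda form, so scoping works the same way for both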

The basic problem as I see it is that rather than leaping between two “stable design points” – (no bindings, unification everywhere) and (explicit bindings, no implicit binding, unification more optional) – we’ve crept and crawled in baby steps with many language extensions, and landed in this blurry midway point (e.g. with -XScopedTypeVariables) that isn’t really in either camp, and isn’t so coherent.

#448 (with or without my additions) is trying to incrementally get us out of this situation, but what a few of us have been thinking is that ultimately we probably want to move away from 2 ^ n language extensions to something more coarse-grained and geared towards best representing these stable design points. E.g. you pick Haskell 98 land, where you lose expressive power but things like the order of forall-bound variables are guaranteed not to matter, or you pick the full-power-but-more-explicitness-required mode, where things like that do matter, but also ambiguities that might otherwise arise are made impossible. There is a lot of nuance in how to actually execute on this while being backwards compatible (it will be a long and delicate road getting there), but I think it is a good (possible) destination to keep in mind.

3 Likes

The @ is supposed to indicate “optional”, or as we call it “invisible”. Long term, it may become the case that we can have both invisible term arguments (e.g. @5, but the 5 could be inferred) and visible / mandatory type arguments (e.g. sizeOf Int, avoiding the nasty -XAllowAmbiguousTypes).
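As a rough illustration of the invisible case as it exists today (a hypothetical Sized class, not a real library; the visible sizeOf Int form is the envisaged future and appears only in a comment):

{-# LANGUAGE AllowAmbiguousTypes, TypeApplications #-}

-- `a` appears nowhere in the method’s argument or result types, which is
-- exactly the situation -XAllowAmbiguousTypes is needed for today.
class Sized a where
  sizeOf :: Int

instance Sized Bool where
  sizeOf = 1

main :: IO ()
main = print (sizeOf @Bool)  -- the invisible type argument made visible at the call site;
                             -- a visible-forall version would instead be called as `sizeOf Bool`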

So to answer your questions:

Yes

The fully explicit type signature, which you would get from GHCi, is:

forall a. a -> a 

the forall binds the variable in the signature, and the @ binds the variable in the term; foralls and @s (once we are being explicit) always line up 1-for-1!

Fair enough, but we write Haskell. System F syntax or notation is somewhat irrelevant to the non-type-theorist.

Just to make that last part explicit, the fully explicit term is something like:

(\ @(b :: Type) (x :: b) -> x :: b) :: forall (a :: Type). a -> a

I’ve added all the signatures I possibly can, and used two separate identifiers to represent the fact that a and b are syntactically unrelated (they just happen to mean the same thing, but that’s semantics).


I forgot to quote, I was responding to

I thought you were explicitly interested in the concepts / theory, and I wanted to be clear this is something that does matter for the user; it isn’t just an implementation detail.

1 Like

Ok got it.

I have one more question then. If the signature is forall a. a -> a then the previous code should be strictly equivalent to (x -> x) :: forall a. a -> a. So will both versions accept the optional parameter even though it doesn’t appear anywhere in the second version?

I am interested in the concepts / theory but not when I am actually programming (i.e. I don’t want to have to write @ everywhere just to satisfy my theorist side).

That’s the bit I still don’t understand, but I’ll trust you on that. If it is necessary, then it is. I’m still voting for implicit parameter and I think you understand my reasoning.

1 Like

I think you meant (\x -> x) :: forall a. a -> a? Yes it is equivalent, one is just more “filled in”.

Not sure what you mean, do you mean \ @_ x -> x? That might be accepted in some contexts. There are some type checking questions unrelated to name resolution to work out, however.

I am interested in the concepts / theory but not when I am actually programming (i.e. I don’t want to have to write @ everywhere just to satisfy my theorist side).

OK!

That’s the bit I still don’t understand, but I’ll trust you on that. If it is necessary, then it is. I’m still voting for implicit parameter and I think you understand my reasoning.

I just want to emphasize that optional != leaky implementation detail. There is no real upper bound to how much of your program a clever language implementation might be able to figure out / guess on its own. The point of this explicit stuff isn’t that GHC is lost and confused, but that the human might be lost and confused. Of course, this is a matter of opinion.

It is fine if you continue to like it :).

I was referring to

and

id2 = \ @a (x :: a) -> x :: a looks like it has 2 parameters. If I understand @tomjaguarpaw correctly, it is a parameter. I can call it with 2 parameters (e.g. id2 @Int 3).

But the parameter is optional, so I don’t have to provide it: id2 3 works as well.
Fair enough, but id2’s signature is only forall a. a -> a, which has only one parameter. How come a function can have 2 parameters (or 1 and a half) yet its type signature shows only one?

If \ @a (x :: a) -> x :: a has type forall a. a -> a, then it is not any different from id1 = \x -> x. id1 and id2 have the same type signature and the same code. Therefore I should also be able to call id1 with 2 parameters (id1 @Int 3) even though id1 never mentioned any optional parameter?

What am I missing ?

forall a. is a parameter. To elaborate, the full type is forall (a :: Type). a -> a, which is why when calling id2 you first (optionally) apply a type, and then a value of that type.

Therefore I should also be able to call id1 with 2 parameters (id1 @Int 3) even though id1 never mentioned any optional parameter?

You can.
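To make the exchange concrete, a minimal sketch (assuming a GHC where the @a binder in a lambda is available, e.g. via -XTypeAbstractions):

{-# LANGUAGE ScopedTypeVariables, TypeApplications, TypeAbstractions #-}

id1 :: forall a. a -> a
id1 = \x -> x               -- type parameter left implicit

id2 :: forall a. a -> a
id2 = \ @a (x :: a) -> x    -- the same type parameter, bound explicitly

-- Both accept the optional type argument, whether or not the definition mentions it:
example1, example2 :: Int
example1 = id1 @Int 3
example2 = id2 @Int 3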

1 Like

Seems to me there’s a difference here between two schools of thinking. There’s an older school in which -> is a type constructor, => is syntax used to introduce a constraint, forall _. is a universal quantifier, and all of these things are distinct and unrelated ways to form types out of simpler types. The word ‘application’ is strictly used to denote a term combining something that can be subsumed by a type formed with the -> type constructor, and something that can be subsumed by the first argument to that type constructor. ‘Type applications’ are their own thing, analogous but conceptually separate. Instance resolution isn’t even spoken of in the same breath.

Then there’s a newer school, influenced if not driven by dependent type theory, in which ->, =>, forall _., and the prophesied forall _ -> are all ‘quantifiers’ with different properties like ‘visibility’ and ‘relevance’. Those properties determine whether and how one may provide a parameter when using a term of that type in order to eliminate the quantifier. For some combination of backwards-compatibility and convenience, the forall _. quantifier can be inferred if not explicitly present in a type, but regardless of whether it is inferred or explicit, it introduces a parameter (invisible, irrelevant) just like -> introduces a parameter (visible, relevant).
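In signature form, the new-school reading looks something like this sketch; the visible-forall line assumes the forall _ -> syntax and is left as a comment:

{-# LANGUAGE ExplicitForAll #-}

-- Each of these introduces a parameter; they differ only in visibility
-- (must the caller write it?) and relevance (does it exist at runtime?):
describe :: forall a. Show a => a -> String
describe = show
-- invisible, irrelevant type parameter a; invisible, relevant dictionary; visible, relevant value

-- describe' :: forall a -> Show a => a -> String   -- visible type parameter (sketch only)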

One doesn’t need to learn dependent type theory to switch gears into the new school (‘dependence’ is a property orthogonal to ‘visibility’ and ‘relevance’, AIUI), but it does represent a shift that was pretty surprising to me when I learned about it, and people educated under the old school might need some pointers when people who have adjusted to the new school use the word ‘parameter’ in its newer, more expansive sense.

Using @ to explicitly bind type parameters is definitely of a piece with the new school—all parameters are parameters, and the @ is just a visibility marker—and I support it, but I’m also not surprised that people who haven’t made the shift find it jarring.

3 Likes

@rhendric I think that’s exactly right. I want to make the more limited -XPatternSignatures better for the “new school”, and I don’t think this should adversely harm the “old school” because they can just continue using -XScopedTypeVariables.

I think it needs the older school [**]: -XPatternSignatures as it was before -XScopedTypeVariables came along.

-XScopedTypeVariables is a terrible extension because it makes explicit forall no longer a merely stylistic choice; its effect is exactly counter to your own rule 2, “the rules are strictly local, no outer scopes / enclosing code needs to be consulted …”. For nearly all extensions, merely switching them on does not change the meaning of an existing program: it enables/allows new syntax, and that syntax is what brings in the new meaning. (So @goldfirere makes a good point on the GitHub thread that a warning/linter at the point of that new syntax would work as well. How’s a warning going to detect that the signature uses an explicit forall and the introduced tyvars are used in the body of the function – being careful not to warn if they’re shadowed?)
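A minimal example of the kind of program whose acceptance turns on exactly that interaction; only the combination of the extension and the explicit forall brings a into scope in the where clause:

{-# LANGUAGE ScopedTypeVariables #-}

f :: forall a. [a] -> [a]
f xs = ys ++ ys
  where
    ys :: [a]        -- with the extension, this `a` is the outer one and the program compiles;
    ys = reverse xs  -- without it (or without the explicit forall), `a` is fresh and GHC rejects ys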

[**] Edit: Or perhaps I mean Scoped Type Variables as documented here – no mention of explicit forall.

I am only using the word parameter as a synonym for “function argument”. If parameter means something different, then indeed it is a source of confusion. Is there a clear definition of a “new school” parameter somewhere?

Yes I think you might be the one person who doesn’t like -XScopedTypeVariables but does want pattern signatures with implicit binding.

Perhaps this deserves another poll.

The other definition is basically “anything that is a function parameter in Core”. That includes “regular” term parameters, type parameters, and dictionary parameters. Once #448 is implemented, we can stop directly referring to Core, and instead refer to the “most explicit Haskell” I wrote above.
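For instance, a hedged sketch; the Core-ish line is pseudo-notation, not GHC’s actual output:

double :: Num a => a -> a
double x = x + x

-- Roughly what Core makes of it: the type and the dictionary become ordinary
-- parameters alongside the “regular” term parameter x (pseudo-notation):
--   double = \ @a ($dNum :: Num a) (x :: a) -> (+) @a $dNum x x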

It makes total sense that this is foreign to people who haven’t been following the proposal process / papers super closely, and have instead built a mental model based on the Haskell that currently exists.

That sounds to me like implementation detail, and I am wondering if the “new school” point of view is only the view through the implementor’s lens (should I know Core to program in Haskell?)

I asked the following question on Reddit (to get a different audience and avoid any bias): “Is class dictionary passing the result of an implementation choice or a theoretical necessity?” If I am right, this confirms that all of this is exposing implementation detail.

A non-Core-dependent way of viewing this is that the new school considers a parameter to be anything over which the function abstracts—any aspect of the function’s meaning that would be determined at the call site of the function, rather than at the definition of the function. That encompasses value parameters, of course, and it also pretty clearly covers dictionary parameters. It’s arguable whether this definition is enough to justify lumping type parameters in as well but I think it does for any given theoretical model of Haskell—either your theoretical model includes type abstractions like System F, in which case type parameters being parameters is a given, or your theoretical model doesn’t, which means your theoretical model doesn’t include polymorphism, which means there is no such thing in that theory as a function a -> a, only functions Int -> Int and Bool -> Bool. Which of those types the term \x -> x has, and thus what \x -> x ‘means’ (what valid object in your theory the term \x -> x corresponds to, in other words), is something that would be determined at the call site, so it’s a parameter.
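A small example of that last point (assuming -XTypeApplications; the definition never mentions the type that each call site picks):

{-# LANGUAGE ExplicitForAll, TypeApplications #-}

identity :: forall a. a -> a
identity = \x -> x

five :: Int
five = identity @Int 5   -- the type parameter supplied explicitly at this call site

yes :: Bool
yes = identity True      -- the type parameter inferred (Bool) at this call site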

1 Like

Thanks John, I think there’s plenty who don’t like -XScopedTypeVariables in its present form. (And indeed the design runs against more recent approaches with extensions of making each narrowly-focussed.) The trouble is it’s widely used, so how do we split out the parts each camp wants and keep happy those who will put up with the present form?

I seem to be not the only person who wants implicit binding; but the numbers are too small to be reliable.

Heh heh. We’re getting few enough straws in this poll. And I don’t have a proposal for where we go with dismantling -XScopedTypeVariables, so I’m not sure what the poll should be asking.

I’m comfortable with the idea that types are parameters as well. It’s more a case of:

a) how soon in their Haskell journey do we start confronting learners with the idea of types (and constraints) as parameters? (There are plenty of learner materials saying things like “the type system infers the correct types for us” [from the Gentle Intro].)

b) do all parameters have to be passed only positionally, placed individually in a decl after the function being defined? IOW

f :: Num b => b -> b
f (x :: a) = x * 2

is passing the type as a parameter in the same way as the decl takes b as a parameter (with no explicit forall). Where is the dictionary-as-parameter in the LHS of f?

I expect a learner not to have a theoretical model – certainly not including dictionaries and types as parameters. Rather, in using Haskell at the term and declaration level, they acquire a theoretical model. (We hope they’ll be interested enough from the examples to read up on it. At least that’s how I got beyond lambda calculus as theory.)

If the only way to get learners ‘over the bump’ into type theory is by them having to learn implementation detail or abstract System F/Dependent typing, most of them will just walk away.

The Gentle Intro also says “Haskell’s type system is powerful enough to allow us to avoid writing any type signatures at all”. (It concedes there are exceptions, but let’s get a learner comfortable before we go into those.) So in general I don’t give signatures for let/where subfunctions, like the inner f in @jaror’s g, h above.

Nevertheless, judicious placement of type signatures … is a good idea, since type signatures are a very effective form of documentation … [Gentle Intro again]

  • Top level signatures, and especially ones from libraries: fair enough.
  • What effective documentation value is explicit forall adding?
  • What effective documentation value is @... adding?
  • What effective documentation value are signatures in let/where adding?
  • Sometimes the answer is to disambiguate existentially quant’d tyvars – which is something learners don’t need to worry about yet.
  • Sometimes the answer is because we’re passing type params in a non-obvious parameter sequence – ditto.
1 Like

I find that code heckofa confusing with s as both a type variable and term variable. Not least because I don’t know your types, so I don’t know if term var s is of type s. Even if so, punning of vars seems of dubious usefulness.

Since we’re talking PatternSigs, have you tried without the signature in the where, but with

shiftUsedBoxes isUsed (boxes :: [Box s']) inBucket'strategies = do
  ...
  where
    ...
    doSwap (source :: Box s', dest :: Box s') = do
      ...

And possibly with a signature on the RHS of doSwap. There I’m using a fresh s', so not relying on the scoped tyvar s.

(I do think in this case I’d put shiftUsedBoxes @s' isUsed ... – that is when we get @... available in patterns – because that type param is ubiquitous.)

s, as in the ST s a monad. To simplify, Box s is an STRef and Warehouse is an ST s.
All the ss are there to make sure that the boxes and shelves belong to the same warehouse.

Not sure I understand what you mean.

It is actually a bad example, because apparently it compiles if I remove the type signatures and even the forall (I must have put it there for a reason though). I’ll try to dig up a better one lol.