Option for writing signature for (.) in Prelude.Linear without DT

The (.) operator defined in linear-base has the type signature:

(.) :: forall {rep} b (c :: TYPE rep) a q m n. (b %1 -> c) %q -> (a %1 -> b) %m -> a %n -> c 

which is not compatible with the one defined in Prelude. This is because the correct signature for (.) with multiplicities added would be:

(.) :: (b %m -> c) -> (a %n -> b) -> (a %(m*n) -> c)

where m*n is a hypothetical product of multiplicities. In practice we don’t have dependent types.
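
For concreteness, here are the four monomorphic compositions that a hypothetical m*n-polymorphic signature would subsume; today they have to be written out by hand (this listing is just an illustration, not code from linear-base):

{-# LANGUAGE LinearTypes #-}

composeLL :: (b %1 -> c) -> (a %1 -> b) -> a %1 -> c   -- 1*1 = 1
composeLL g f x = g (f x)

composeLM :: (b %1 -> c) -> (a -> b) -> a -> c         -- 1*Many = Many
composeLM g f x = g (f x)

composeML :: (b -> c) -> (a %1 -> b) -> a -> c         -- Many*1 = Many
composeML g f x = g (f x)

composeMM :: (b -> c) -> (a -> b) -> a -> c            -- Many*Many = Many, i.e. Prelude's (.)
composeMM g f x = g (f x)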

Here’s an ad hoc option for writing the signature of this function without DT.

{-# LANGUAGE DataKinds, FlexibleInstances, LinearTypes, MultiParamTypeClasses #-}
import GHC.Base

-- p is the multiplicity of the composition, resolved from the argument
-- multiplicities m and n by instance selection.
class ComposeMult m n p where
  (...) :: (b %m -> c) -> (a %n -> b) -> (a %p -> c)

-- Both composed functions are linear: the composition works at any p.
instance {-# OVERLAPPING #-} ComposeMult One One p where
  {-# INLINE (...) #-}
  (...) g f x = g (f x)

-- Otherwise, fall back to an unrestricted composition.
instance {-# OVERLAPPABLE #-} ComposeMult m n Many where
  {-# INLINE (...) #-}
  (...) g f x = g (f x)

-- The fully linear composition, recovered as a special case of (...).
(.|) :: (b %1 -> c) -> (a %1 -> b) -> (a %1 -> c)
(.|) = (...)

That is, we can exploit instance resolution to at least get a polymorphic (...) that produces a linear function if both of the composed functions are known to be linear at the use site of (...), and falls back to a regular function if there is any doubt. This is enough for a lot of the use cases. Furthermore, it is likely that GHC will inline away the typeclass dictionaries, or we can use specialization pragmas to eliminate them manually.
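
For example, assuming the definitions above behave as intended, usage would look something like this (swap, swapTwice and showLength are just illustrative names):

swap :: (a, b) %1 -> (b, a)
swap (x, y) = (y, x)

-- Both composed functions are linear, so the OVERLAPPING instance is selected
-- and the composition can itself be used as a linear function.
swapTwice :: (a, b) %1 -> (a, b)
swapTwice = swap ... swap

-- show and length are unrestricted, so resolution falls back to the
-- OVERLAPPABLE instance and the composition is unrestricted.
showLength :: [Int] -> String
showLength = show ... length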

2 Likes

I don’t think this has much to do with dependent types. The product of multiplicities could be implemented as a separate thing. It just requires a new GHC proposal and some implementation work.

3 Likes

That’s right. @aspiwack can comment on what makes the multiplicity semiring work or not work, and why it isn’t m*n indeed. To be honest, I forget exactly what makes this not work at the moment, so I’d be interested in seeing his answer.

1 Like

It’s difficult to deal with products (and sums) in typechecking because of a conjunction of reasons:

  • A product of multiplicities would be a type family, so you’d need evidence for equality (a sketch of what this might look like follows after this list). Currently, the type checker fails when it needs coercion evidence to prove that a variable is used the right number of times. That’s because we don’t know how to give this evidence to Core (and linearity is checked on the output of the desugarer). In fact, there used to be one such type family defined. It doesn’t appear that we still have it, probably because we ended up being incapable of testing it.
  • Even if we could use evidence in multiplicity checks, to be able to use multiplicity products almost at all, we would need to be able to unify modulo multiplicity products. The full theory of multiplicities is that of a semiring, and unification up to semiring theory is undecidable (this is not obvious). Even if we give up on addition and only use multiplication there are complications, because unification modulo a commutative semigroup doesn’t yield a single principal type (it does yield finitely many solutions, though). Another approach is to assume that multiplicities are always 1 or Many and use some kind of SAT solving to solve equations. There is one paper about type inference for linear Haskell; if memory serves, that’s basically their approach. But it doesn’t sound like something that is easy to implement in GHC (or that anyone would let me do, honestly).
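
For concreteness, here is a sketch of what the type-family route from the first bullet might look like. It is hypothetical: as explained above, GHC currently rejects this kind of definition, because the linearity checker cannot make use of the equality evidence needed to reduce MultMul.

{-# LANGUAGE DataKinds, LinearTypes, TypeFamilies #-}
import GHC.Base

-- A user-written product of multiplicities.
type family MultMul (m :: Multiplicity) (n :: Multiplicity) :: Multiplicity where
  MultMul 'One  n = n
  MultMul 'Many n = 'Many

-- Rejected today: checking that x is used MultMul m n times requires coercion
-- evidence that cannot be passed through to the linearity check on Core.
compose :: (b %m -> c) -> (a %n -> b) -> a %(MultMul m n) -> c
compose g f x = g (f x)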

I do have the intuition that the strategy we chose, to only allow multiplicity variables which are introduced by the user and otherwise default every variable to Many, could potentially simplify this problem, but I’ve never had the time to think about it properly.

I think we can think of @Void’s proposal as working around the issue by giving up on a functional semantics for the product and using a relational semantics instead. Maybe it’s a reasonable compromise.

3 Likes

Why not just some built-in type operator?

Isn’t it the same for rank >2 types? We do already have those and they (mostly) work fine.

2 Likes

One option is Boolean unification (with unification variables and constants only), which is both completely decidable and unitary, meaning it has principal types. This, just like your last point, requires One and Many to be the only multiplicities.
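
Concretely, a minimal sketch of that correspondence (my own illustration, assuming One and Many are the only multiplicities): reading One as False and Many as True turns the multiplicity product into Boolean disjunction, so equations between products become ordinary Boolean equations.

data Mult = One | Many

times :: Mult -> Mult -> Mult
times One  n = n      -- One is the unit of the product
times Many _ = Many   -- Many is absorbing

toBool :: Mult -> Bool
toBool One  = False
toBool Many = True

-- Relating the two: toBool (times m n) == (toBool m || toBool n),
-- so unifying multiplicity products reduces to Boolean unification.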

Let me show off a working example of @Void’s formulation in Aith, my toy language. My typechecker is able to validate this:


inline compose<
        A:metatype,
        B:metatype,
        C:metatype,
        M:multiplicity,
        N:multiplicity
> : (B -M C) -* (A -N B) -* A -(M | N) C
  = \(f :* B -M C)
 -> \(g :* A -N B)
 -> \(x :^(M | N) A)
 -> f {g {x}};

This also typechecks even if you elide the types and multiplicities of the arguments.


inline compose<
        A:metatype,
        B:metatype,
        C:metatype,
        M:multiplicity,
        N:multiplicity
> : (B -M C) -* (A -N B) -* A -(M | N) C
  = \(f :* _)
 -> \(g :* _)
 -> \(x :^_ _)
 -> f {g {x}};

Lastly, just for fun, here’s the principal type my compiler spits out:

inline compose<
        A:multiplicity,
        B:metatype,
        C:metatype,
        D:metatype,
        E:multiplicity,
        F:multiplicity,
        G:multiplicity,
        H:multiplicity
> = \(f :^A D -(F & G (+) F & G & H (+) F & H) C)
 -> \(g :^(E (+) E & F & G (+) E & F & G & H (+) E & F & H (+) F & G (+) F & G & H (+) F & H) B -H D)
 -> \(x :^(G (+) G & H (+) H) B)
 -> f@_ {g@_ {x}};
1 Like

Sorry to get back to you after so long (I let this get buried in my emails and only saw it again now).

By Boolean unification, do you mean unification in the theory of Boolean algebras? Exploiting the fact that on two-element sets every (endo)function can be expressed as a combination of Boolean operations (which I suppose would mean that this technique can’t be extended to more than two multiplicities)? Or is it something else entirely?

Yes, exactly what I meant. Boolean unification is a type of E-unification which (if limited to variables and constants) has very nice properties like being decidable and having most general unifiers.

The unification is limited to a set of two elements, as you mentioned, but I’m not convinced that means a zero multiplicity is impossible. I think if you parameterize functions with two multiplicities, one for linearity and one for erasure, you could effectively have the three states that Idris has.

Let me draft an example:

data Multiplicity = One | Many
data Erasure = Erased | Concrete

type Normal = Int %Many %Concrete -> Int
type Linear = Int %One %Concrete -> Int
type Evidence = Int %Many %Erased -> Int
type Degenerate = Int %One %Erased -> Int

I should note that I haven’t really fleshed out this idea yet, but it seems like it should be possible.

I’ve been coming back to this every few days for two weeks. Every time I thought: wait, here’s another idea, and I didn’t answer because I wanted to think about it some more. I should reply now, even if I don’t necessarily have a clearer idea of what’s going on.

Independent of whether what you’re proposing is a good idea, it’s a different semiring than the usual {0, 1, Many}. That being said, one way to think about your proposal could be: any finite set can be embedded in the product of n 2-element Boolean algebras, and then you can perform unification on them to compute modulo any function. The only issue is that you may end up inferring junk types, which may not be very convenient, I don’t know.
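
As a concrete sketch of that remark (my own illustration, reusing the Multiplicity/Erasure split from your draft): the three usual usages embed into the four pairs, and the leftover pair is exactly the junk element.

data Multiplicity = One | Many
data Erasure = Erased | Concrete

data Usage = Zero | Linear | Unrestricted   -- Idris-style 0 / 1 / omega

embed :: Usage -> (Erasure, Multiplicity)
embed Zero         = (Erased,   Many)   -- the draft's Evidence case
embed Linear       = (Concrete, One)
embed Unrestricted = (Concrete, Many)
-- (Erased, One), the draft's Degenerate case, is never produced by embed;
-- it is the junk point that inference could still come up with.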

Another thought I had was that, in the case of linearity and similar properties (0, affine, relevant), the multiplicities are all their own squares. So the semiring looks like a Boolean ring, which, of course, enjoys the same unification properties as Boolean algebras. Except that, really, a “Boolean” semiring like multiplicities is actually very different from a Boolean ring. It turns out that the additive inverse really matters for those properties.

There is a Boolean algebra hidden in multiplicities, though: the powerset of ℕ, with linear corresponding to {1}, 0 to {0}, Many to ℕ, and affine to {0,1}. Multiplication is union. But unfortunately, addition (which is liftA2 (+)) doesn’t seem definable from the Boolean algebra operations.

So anyway… I don’t know what’s a good idea. But it’s certainly fun to ponder.

2 Likes