Sneak Peek: Bolt Math

Fixing the Num hierarchy for Great Good

The Haskell numerical hierarchy is now dust, scattered to the winds.

It should come as no surprise to anyone following my work that if I am complaining about something, it is probably because I am working on fixing it. There is very little point in moaning about something if you aren’t willing to do something about it. One of the things I have complained about most is the Num hierarchy, and so *snap*.

The deconstruction of the Bits class, such that it butts up against the bottom of the Num hierarchy, just begs for unification, and the game engine that I am slowly but inevitably building with all of these pieces (math, memory, cryptography) requires a level of mathematical precision that base does not allow. If you’ve seen my posts about not just Boolean but Logic and Lattice, well, this is where that has been leading.

Here goes: I am giving Haskell the math library that it deserves.

The problem

It has always bothered me that Haskell, supposedly one of the best programming languages, has what is, hands down, one of the worst built-in math libraries.

Take, for example, that:

  • syntactic operators are defined as part of critical typeclass definitions
  • literal support and conversion require full arithmetic implementations
  • we have three different power operators: ^, ^^, and **
  • the definitions of Num and Fractional fundamentally disallow instances for vectors, polynomials, etc.
  • it is impossible to support things like symbolic math

This causes heaps of problems, ranging from subtle things like partial Num implementations written just to access fromInteger, to more obviously destructive effects, such as the explosion of operators in libraries like linear. Implementing things is a pain in the arse: have you memorized the hierarchy? Can you tell me which class fromRational is defined in?


There are two major distinct issues that cause most of these problems, and both can be solved with the same resolution. They are the primary causes of the trouble in the Num hierarchy, with most other effects being secondary to one of them:

  • literal syntactic support being tied to numeric / ring behavior
  • the implicit assumption of homogeneous operations in a field (hah) that is often heterogeneous

The former is an issue because I shouldn’t have to implement (+), (-), (*), negate, abs, and signum just to get access to fromInteger for a type that is representable as an integer but fundamentally isn’t one, e.g. bit flags and mask enums. The latter is an issue because it ignores heterogeneous math like vector-scalar operations, and the fact that exponentiation can produce complex numbers (remind me, what is the square root of -1 again?).
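
To make the first point concrete, here is a sketch of the kind of partial Num instance that base forces on us today just to get literal syntax. The Flags type is hypothetical, but the shape of the problem is exactly this:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Data.Bits (Bits, (.|.))
import Data.Word (Word8)

-- A permission-flags type: representable as an integer, but addition
-- and multiplication are meaningless for it.
newtype Flags = Flags Word8
    deriving (Eq, Show, Bits)

-- To get `0x04 :: Flags` today, five of the six Num methods are lies:
instance Num Flags where
    fromInteger = Flags . fromInteger
    (+)    = error "Flags: (+) is meaningless"
    (*)    = error "Flags: (*) is meaningless"
    (-)    = error "Flags: (-) is meaningless"
    abs    = error "Flags: abs is meaningless"
    signum = error "Flags: signum is meaningless"

readFlag, writeFlag :: Flags
readFlag  = 0x01
writeFlag = 0x02
```

Everything works until someone innocently writes `readFlag + writeFlag` and the program crashes at runtime.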


Now, there is an explanation for all of this - the numerical hierarchy was one of the first things written, way back in the before times, and everything depends on it - it literally predates the modern type system’s capabilities. But an explanation isn’t a valid reason not to improve things, so I’ve gone and done something about that.

It’s not finished yet, but it is time for a sneak peek at the new mathematical hierarchy, which allows for heterogeneous math and the consolidation of a great many operators into single symbols.

Literal expressions

To start, literals have been separated from actual numeric requirements. This allows numeric literals to be used for other things - e.g. symbolic math, bit flags, anything that can be represented as a number but doesn’t support addition or multiplication.

-- Integer literals

class ExpressibleByIntegerLiteral a where
    fromIntegerLiteral :: Integer -> a

class
    (    ExpressibleByIntegerLiteral a
    ) => ConvertibleToIntegerLiteral a where
    toIntegerLiteral :: a -> Integer

-- Rational literals

class ExpressibleByRationalLiteral a where
    fromRationalLiteral :: Rational -> a

class
    (    ExpressibleByRationalLiteral a
    ) => ConvertibleToRationalLiteral a where
    toRationalLiteral :: a -> Rational

-- String literals

class ExpressibleByStringLiteral a where
    fromStringLiteral :: String -> a

class
    (    ExpressibleByStringLiteral a
    ) => ConvertibleToStringLiteral a where
    toStringLiteral :: a -> String

-- List literals TBD

These will be used with RebindableSyntax later.
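
For instance, a symbolic-expression AST can accept integer literals without supporting any arithmetic at all. This is a sketch with a hypothetical Expr type, restating the literal class from above so the example stands alone:

```haskell
-- Restating the literal class from above, for a self-contained example:
class ExpressibleByIntegerLiteral a where
    fromIntegerLiteral :: Integer -> a

-- A tiny symbolic-math AST: it can *hold* numbers, but evaluating,
-- adding, or multiplying them is a separate concern entirely.
data Expr
    = Lit Integer
    | Var String
    | Add Expr Expr
    deriving (Eq, Show)

instance ExpressibleByIntegerLiteral Expr where
    fromIntegerLiteral = Lit
```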

Arithmetic

Arithmetic operations have been split off, and follow a specific sub-hierarchy:

HeterogeneousOperation =>
    OperativeSemigroup =>
        OperativeMonoid =>
            OperativeGroup 

One of the key decisions I have instituted is that the heterogeneous operations have the more colloquial names, while the homogeneous operations with associated properties get the more mathematical nomenclature - this pattern is followed library-wide.

Addition and subtraction

class Addition a b where
    type Sum a b :: Type
    plus :: a -> b -> Sum a b

class (Addition a a, Sum a a ~ a) => AdditiveSemigroup a where
    add :: a -> a -> a
    add = plus

class AdditiveSemigroup a => AdditiveMonoid a where
    zero :: a

class Subtraction a b where
    type Delta a b :: Type
    minus :: a -> b -> Delta a b

type SubtractiveMagma a = (Subtraction a a, Delta a a ~ a)

class (AdditiveMonoid a, SubtractiveMagma a) => AdditiveGroup a where
    neg :: a -> a
    sub :: a -> a -> a
    sub = minus
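
To illustrate what the heterogeneous split buys us, here is a sketch (with a hypothetical V2 type) of one class serving both homogeneous and heterogeneous addition:

```haskell
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

import Data.Kind (Type)

-- The heterogeneous class from above, restated for a runnable example:
class Addition a b where
    type Sum a b :: Type
    plus :: a -> b -> Sum a b

data V2 = V2 Double Double
    deriving (Eq, Show)

-- Homogeneous vector addition...
instance Addition V2 V2 where
    type Sum V2 V2 = V2
    plus (V2 x1 y1) (V2 x2 y2) = V2 (x1 + x2) (y1 + y2)

-- ...and heterogeneous vector-scalar broadcast, through the same class.
instance Addition V2 Double where
    type Sum V2 Double = V2
    plus (V2 x y) s = V2 (x + s) (y + s)
```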

Multiplication and division

Note that Euclidean division is separate from fractional division, and that idiv is differentiated from div, although div may be implemented as idiv for, e.g., Integrals.

class Multiplication a b where
    type Product a b :: Type
    times :: a -> b -> Product a b

class Division a b where
    type Ratio a b :: Type
    divides :: a -> b -> Ratio a b

type DivisiveMagma a = (Division a a, Ratio a a ~ a)

-- NOTE: I am probably going to make this heterogeneous too
class (Division a a) => EuclideanDivision a where

    type Quotient a :: Type
    type Remainder a :: Type
    quotRem :: a -> a -> (Quotient a, Remainder a)
    quot    :: a -> a -> Quotient a
    rem     :: a -> a -> Remainder a
    idivMod :: a -> a -> (Quotient a, Remainder a)
    idiv    :: a -> a -> Quotient a
    mod     :: a -> a -> Remainder a

class (Multiplication a a, Product a a ~ a) => MultiplicativeSemigroup a where
    mul :: a -> a -> a
    mul = times

class MultiplicativeSemigroup a => MultiplicativeMonoid a where
    one :: a

class (MultiplicativeMonoid a, DivisiveMagma a) => MultiplicativeGroup a where
    recip :: a -> a
    div :: a -> a -> a
    div = divides
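
As a sketch of how an existing integral type might slot in - assuming the classes above, trimmed to a minimal subset, with the Prelude functions reused qualified - note how quot truncates towards zero while idiv floors:

```haskell
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleContexts #-}

import Data.Kind (Type)
import Prelude (Int, IO)
import qualified Prelude as P

-- Restating the relevant classes (trimmed) for a self-contained example:
class Division a b where
    type Ratio a b :: Type
    divides :: a -> b -> Ratio a b

class Division a a => EuclideanDivision a where
    type Quotient a :: Type
    type Remainder a :: Type
    quot :: a -> a -> Quotient a
    rem  :: a -> a -> Remainder a
    idiv :: a -> a -> Quotient a
    mod  :: a -> a -> Remainder a

instance Division Int Int where
    type Ratio Int Int = Int
    divides = P.div          -- per the note: div implemented as idiv here

instance EuclideanDivision Int where
    type Quotient Int  = Int
    type Remainder Int = Int
    quot = P.quot            -- truncates towards zero
    rem  = P.rem
    idiv = P.div             -- floors towards negative infinity
    mod  = P.mod
```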

Exponentiation, Logarithms, and Radication

These are currently in-progress due to the complicated nature of exponentiation being an arithmetic operation but logarithms being a transcendental operation. There is a subtle difference between ‘exponentiation’ and ‘exponential fields’.

class Exponentiation a b where
    type Power a b :: Type
    pow :: a -> b -> Power a b

class (Exponentiation a b, Power a b ~ a, Integral b) => IntegralPower a b where
    ipow :: a -> b -> Power a b

class (Floating a, Exponentiation a a, Power a a ~ a) => FloatingPower a where
    fpow :: a -> a -> Power a a

-- TBD - radication as the arithmetic

Rebindable syntax

You will probably want to use RebindableSyntax with this library, and to import the respective module that provides:

import Prelude (Eq(..), Ord(..), (>>=), (>>), fail)
fromInteger :: ExpressibleByIntegerLiteral a => Integer -> a
fromInteger = fromIntegerLiteral

fromRational :: ExpressibleByRationalLiteral a => Rational -> a
fromRational = fromRationalLiteral

fromString :: ExpressibleByStringLiteral a => String -> a
fromString = fromStringLiteral

negate :: (AdditiveGroup a) => a -> a
negate = neg
{-# INLINE negate #-}

ifThenElse :: Bool -> a -> a -> a
ifThenElse c t f = if c then t else f
{-# INLINE ifThenElse #-}

This module is incredibly useful: we begin hijacking Haskell’s syntax to shim in our own classes, making 0 :: Int call fromIntegerLiteral from our ExpressibleByIntegerLiteral class and bypassing Num entirely.
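
Here is a minimal self-contained sketch of the trick, with a hypothetical UserId type - there is no Num instance anywhere behind the literal:

```haskell
{-# LANGUAGE RebindableSyntax #-}

import Prelude (Integer, Show)

class ExpressibleByIntegerLiteral a where
    fromIntegerLiteral :: Integer -> a

-- Under RebindableSyntax, literals desugar to whatever `fromInteger`
-- is in scope - ours, not Num's:
fromInteger :: ExpressibleByIntegerLiteral a => Integer -> a
fromInteger = fromIntegerLiteral

newtype UserId = UserId Integer
    deriving Show

instance ExpressibleByIntegerLiteral UserId where
    fromIntegerLiteral = UserId

alice :: UserId
alice = 42   -- desugars to fromInteger 42, bypassing Num entirely
```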

Operators

Operators have been defined in their own module to allow for ease of selection.

Suites of both heterogeneous and homogeneous operators are provided - here are the heterogeneous operators, for example:

infixr 8  ^
infixl 7  *, /
infixl 6  +, -

(+) :: (Addition a b) => a -> b -> Sum a b
a + b = plus a b

(-) :: (Subtraction a b) => a -> b -> Delta a b
a - b = minus a b

(*) :: (Multiplication a b) => a -> b -> Product a b
a * b = times a b

(/) :: (Division a b) => a -> b -> Ratio a b
a / b = divides a b

(^) :: (Exponentiation a b) => a -> b -> Power a b
a ^ b = pow a b

Between rebindable syntax and the operators, we can now use standard x + y syntax. Note that type families do inhibit type inference, so the heterogeneous operators do not play well with untyped literals: x + 0 will probably complain that it doesn’t know what type 0 is, since heterogeneous math means it can’t be inferred from x. If that is a problem, you can use the homogeneous operators, which are much better in that regard.
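
The caveat can be demonstrated with a tiny self-contained sketch (hypothetical instances; the commented-out definition is the one GHC rejects):

```haskell
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE MultiParamTypeClasses #-}

import Data.Kind (Type)
import Prelude (Double, IO)
import qualified Prelude as P

class Addition a b where
    type Sum a b :: Type
    plus :: a -> b -> Sum a b

infixl 6 +
(+) :: Addition a b => a -> b -> Sum a b
(+) = plus

instance Addition Double Double where
    type Sum Double Double = Double
    plus = (P.+)

good :: Double -> Double
good x = x + (0 :: Double)   -- fine: both argument types are known

-- bad :: Double -> Double
-- bad x = x + 0             -- rejected: GHC cannot infer the literal's
--                           -- type, since `Sum Double b ~ Double` does
--                           -- not determine b
```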

Rings and Fields

These are just convenient type aliases that really help make constraint declarations smaller.

-- Proper naturals (lacking zero) are only semirings
type Semiring a = (AdditiveSemigroup a, MultiplicativeSemigroup a)

-- Integers are only rings
type Ring a = (AdditiveGroup a, MultiplicativeMonoid a)

-- Reals are a field
-- Technically, a skew field or division ring - apparently there is very little
-- difference, and they are the exact same thing in the finite case
type NonZero a = a -- A hack for now
type Field a = (Ring a, MultiplicativeGroup (NonZero a))

-- TODO: Include OrderedField, etc

Logarithmic

Logarithms / the exponential field are part of the transcendentals.

-- NOTE: I may split into Exponential and Logarithmic
class (Field a, FloatingPower a) => Exponential a where

    exp :: a -> a

    log :: a -> a

    logBase :: a -> a -> a
    logBase b x = div (log x) (log b)

Trigonometry

Also part of the transcendentals.

class (Field a) => Trigonometric a where

    pi :: a

    sin :: a -> a
    cos :: a -> a
    tan :: a -> a

    asin :: a -> a
    acos :: a -> a
    atan :: a -> a

    atan2 :: a -> a -> a

Epsilon

class Epsilon a where
    epsilon :: a
    -- Near with absolute error
    nearEq  :: a -> a -> Bool
    -- Near with relative error
    near    :: a -> a -> Bool
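
A sketch of what a Double instance might look like - the implementation here is my own assumption, not final library code, with the class restated so the example stands alone:

```haskell
class Epsilon a where
    epsilon :: a
    nearEq  :: a -> a -> Bool   -- near, with absolute error
    near    :: a -> a -> Bool   -- near, with relative error

instance Epsilon Double where
    -- IEEE 754 binary64 machine epsilon, 2^-52:
    epsilon = 2.220446049250313e-16
    nearEq a b = abs (a - b) <= epsilon
    -- scale the tolerance by the larger operand's magnitude:
    near a b = abs (a - b) <= epsilon * max 1 (max (abs a) (abs b))
```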

Signs for number lines

These are prerequisites for defining our numeric types.

The ‘signed’ class is as close to a definition of the ‘reals’ as I can get, when combined with IsReal further down the line.


class (Ord a) => Signed a where

    type Sign a :: Type -- eg Pos | Zero | Neg, but extendable
    -- See: UniversalSign

    sign :: a -> Sign a

    abs    :: a -> a
    signum :: a -> a

    isNegative :: a -> Bool
    isZero :: a -> Bool
    isPositive :: a -> Bool

    -- NOTE: These are here instead of in their respective classes, to allow
    -- the classes to support the actual values

    isEpsilon :: a -> Bool
    isInfinite :: a -> Bool
    isNegativeZero :: a -> Bool
    isPositiveZero :: a -> Bool
    isNegativeInfinite :: a -> Bool
    isPositiveInfinite :: a -> Bool
    isNegativeEpsilon :: a -> Bool
    isPositiveEpsilon :: a -> Bool

data SimpleSign = Pos | Zero | Neg

data UniversalSign
    = NegativeInfinity | Negative    | NegativeEpsilon
    | NegativeZero     | NeutralZero | PositiveZero
    | PositiveEpsilon  | Positive    | PositiveInfinity

-- NOTE: These have to be classes because you can have signed numbers with
-- unsigned infinity

-- Projectively extended reals
class (Signed a) => Infinity a where
    infinity :: a

-- Hyperreals
-- NOTE: Infinitesimals are implicitly signed due to their definition as being
-- in the range between -epsilon and +epsilon
class (Signed a) => Infinitesimal a where
    infinitesimal :: a
    negativeInfinitesimal :: a

-- Extended reals if + Infinitesimal
class (Infinity a) => SignedInfinity a where

    positiveInfinity :: a
    positiveInfinity = infinity

    negativeInfinity :: a

type Surreal a = (SignedInfinity a, Infinitesimal a)

-- Don't forget your NaN!
class NaN a where
    isNaN :: a -> Bool

Yeah, you read that right - this library is powerful enough to support hyperreals and extended reals without any special requirements.

Rounding

Ironically, this is also a prerequisite for defining our numeric types.

-- TBD - some sort of `Integral (Whole a)` constraint probably
class ProperFraction a where

    type Whole a :: Type
    -- TODO: Maybe Fraction a?

    properFraction :: a -> (Whole a, {- Fraction -} a)

    whole :: a -> Whole a
    whole = fst . properFraction

    fraction :: a -> a
    fraction = snd . properFraction

class (ProperFraction a) => Rounded a where
    truncate :: a -> Whole a -- Towards zero
    round    :: a -> Whole a -- Towards nearest whole

class (Rounded a) => DirectedRounded a where
    ceiling  :: a -> Whole a -- Towards positive infinity
    floor    :: a -> Whole a -- Towards negative infinity

Finally, Numbers

We now get to define the original hierarchy, stripped of non-essentials, with minor nomenclatural differences.

class (Signed a, Semiring a) => IsNumber a where
    fromInteger :: Integer -> a

-- NOTE: If we rename this to IsRational, Signed can be renamed Real?
class (IsNumber a, Ord a) => IsReal a where
    toRational :: a -> Rational

-- NOTE: No IsWhole class

class (IsReal a) => IsNatural a where
    fromNatural :: Natural -> a

class (IsNatural a, Ring a) => IsIntegral a where
    toInteger :: a -> Integer

class (IsNumber a, Field a) => IsFractional a where
    fromRational :: Rational -> a

-- I am still pondering these two things:
-- class (IsIntegral a, IsFractional a) => IsRational a where
type IsRealFrac a = (IsReal a, DirectedRounded a {- , ProperFraction a -})

-- NOTE: This could be cleaned up further but it is not critical
-- NOTE: NaN, infinities, and negative zeroes are now handled by Signed
class (IsRealFrac a, Floating a) => IsFloating a where
    
    floatRadix :: a -> Integer
    floatDigits :: a -> Int
    floatRange :: a -> (Int, Int)
    decodeFloat :: a -> (Integer, Int)
    encodeFloat :: Integer -> Int -> a
    exponent :: a -> Int
    significand :: a -> a
    scaleFloat :: Int -> a -> a
    isDenormalized :: a -> Bool
    isIEEE :: a -> Bool

But why?

So I can do this:

class
    ( AdditiveGroup v
    , Field (Scalar v)
    ) => VectorSpace v where

    type Scalar v :: Type

    scale :: Scalar v -> v -> v
    default scale :: (VectorScalarMul v) => Scalar v -> v -> v
    scale = flip times

mulS :: (VectorSpace v, VectorScalarMul v) => v -> Scalar v -> v
mulS = times

divS :: (VectorSpace v, VectorScalarDiv v) => v -> Scalar v -> v
divS = divides

type VectorScalarAdd v =
    (Addition v (Scalar v), Sum v (Scalar v) ~ v)

type VectorScalarSub v =
    (Subtraction v (Scalar v), Delta v (Scalar v) ~ v)

type VectorScalarMul v =
    (Multiplication v (Scalar v), Product v (Scalar v) ~ v)

type VectorScalarDiv v =
    (Division v (Scalar v), Ratio v (Scalar v) ~ v)

Oh! OH! Now I can do V3 1 2 3 * 4, but 1 * 2 still works, with the same operator, thanks to our heterogeneous math. This is a massive improvement over libraries like linear, which require the following operators (and most other vector libraries follow similar conventions):

  • (^+^)
  • (^-^)
  • (^*)
  • (*^)
  • (^/)
  • (.-.)
  • (.+^)
  • (.-^)

Do you know what these do off the top of your head? I don’t! And now we don’t have to - we just use +, -, *, /, and ^.
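
Here is a minimal self-contained sketch of that claim, with a hypothetical V3 type. Note the annotation on the scalar literal, per the inference caveat from the operators section:

```haskell
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE MultiParamTypeClasses #-}

import Data.Kind (Type)
import Prelude (Double, Eq, Show, IO)
import qualified Prelude as P

class Multiplication a b where
    type Product a b :: Type
    times :: a -> b -> Product a b

infixl 7 *
(*) :: Multiplication a b => a -> b -> Product a b
(*) = times

data V3 = V3 Double Double Double
    deriving (Eq, Show)

-- Vector-scalar and scalar-scalar multiplication, one operator:
instance Multiplication V3 Double where
    type Product V3 Double = V3
    times (V3 x y z) s = V3 (x P.* s) (y P.* s) (z P.* s)

instance Multiplication Double Double where
    type Product Double Double = Double
    times = (P.*)

v :: V3
v = V3 1 2 3 * (4 :: Double)   -- annotated, since Product is not injective

s :: Double
s = (1 :: Double) * (2 :: Double)
```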

Not only that, but because we defined heterogeneous operations, it is also trivial to write instances for addition and subtraction of affine points with vectors, which is ALSO heterogeneous.

Again, we just use plus and minus and stuff:

class
    ( VectorSpace (Diff p)
    ) => AffineSpace p where

    type Diff p :: Type

    translate :: p -> Diff p -> p -- alt name: offset
    default translate :: (Addition p (Diff p), Sum p (Diff p) ~ p) => p -> Diff p -> p
    translate = plus

    diff :: p -> p -> Diff p
    default diff :: (Subtraction p p, Delta p p ~ Diff p) => p -> p -> Diff p
    diff = minus

Coming Soon: The Full Geometric Suite

What I have described so far is only about half of what is coming down the pipeline; I have a full implementation of geometric algebra, with support for quadratic, inner, normed, exterior, dual, projective, and conformal spaces, including grades, blades, and multivectors, all of which is only made possible by the deconstruction of the original numerical hierarchy.


I know this has been a huge code splat with very little context, so if you are still with me, thank you for reading! I’ll try to get an actual library pushed out sooner rather than later (I need it for memalloc, which I need for botan, which I need for my game engine! Vroom!)

32 Likes

Incredible, love it!

Have you heard of SubHask? It was exploring this space quite a while ago. Never really amounted to anything as far as I know, but there may be some interesting stuff to learn from it.

4 Likes

The way to my heart – effortless broadcasting over multidimensional arrays! Very cool

6 Likes

Extant examples are always a good resource :blush: for comparison, to learn from, and to help make sure I am covering all of the necessary concepts; putting this all together in an ergonomic format has been a challenge, and I can imagine this difficulty might have contributed to subhask’s resulting lack of use.


Edit: There is also numeric-prelude as prior art, which looks so good until you click a module and see it is all class C a => C a where and I am instantly like :melting_face:

4 Likes

Exciting! I’ve also been bothered by this. It would be nice to see how this interacts with these conversations:

4 Likes

I like the cut of your jib.

I’ve also worked along similar lines…
The linear issue you raised - the need for so many “operators” - concerned me too.
My solution was not to try and change the Num Class, but to implement a different algebra.
I developed cl3 to remove the issues with linear. But, the main concerns I had was with IEEE 754 floating point numbers, so I developed a library based on the Posit Standard, posit. I then combined both efforts to make cl3-posit, to sketch out an arbitrary precision Clifford Algebra of Physical Space.

I’ve been admiring your efforts on Memory abstractions, posit currently depends on data-dword, if your efforts result in an arbitrary precision 2’s Complement Integer, I would happily be an early adopter.

Thanks, N.W.

4 Likes

We use numhask at work, which handles the vector space stuff pretty nicely with its *Action classes. I think it does pretty much everything you outline in the post?

3 Likes

Interesting read, thank you; the discrepancies of the Prelude Num class hierarchy have always bothered me!

May I suggest renaming the divides class function? “a divides b” means that a divides b without a remainder. Maybe divide or divideBy?

2 Likes

I mean, considering my ExpressibleBy and ConvertibleTo classes, I am definitely in support of the first - inasmuch as I think it could go even further (where’s my fromNatural? And my fromWhole, assuming we distinguish Natural and Whole numbers? I would like to support more literal types too, and would e.g. give booleans specific support for 0 and 1). And I think I am fairly orthogonal to the second (but I do support it).

Ahoy, matey!

And yes, that is on the table somewhere, if I can fuse my improved bits to the new num stack (which they are designed to do).

I am having a bit of a complex reaction to discovering this package. This is a little bit like landing my homemade rocket on the moon and getting out to see someone waving at me from their own rocket. It is a pity that Numhask managed to elude my notice until now, because it looks good enough that I almost might not have bothered writing my own! But only maybe…

On one hand, it is a math library, so of course it does the same things - if it did something different, I’d be very concerned! On the other hand, there is a fascinating amount of convergent evolution and it validates several of the design choices that I made, including some things I left out because I was still mulling them over.

I will have to spend some time dissecting numhask for my own benefit :slight_smile: but I think I prefer my own foundation laid on heterogeneous math, and - it’s totally a personal opinion, but - I still consider defining symbolic operators as part of typeclasses to be a crime against interoperability.

I say, let competition breed quality. Also, the upcoming geometric algebra portion will include things that numhask lacks - like wedge products and multivectors, transdimensional operations where heterogeneous support is not just suggested but required.

Plus, there are two types of libraries - those we write for ourselves to achieve our own goals that we allow others to use, and those we write specifically for use by other people. My library falls more firmly into the latter; I want this to be approachable and friendly to learners, with a specific goal of game development - hence a large focus on starting with soft / colloquial terminology and increasing in specificity.

While numhask is easily parseable by someone comfortable with advanced math, such as ourselves, it isn’t quite so approachable for lesser mortals. Gotta make it easy to climb the ivory tower, y’know?

Nice catch! I suspect I automatically pluralized it because everything else ended in -s >_<

I actually need to ponder this seriously (I am not kidding about being particular about terminology to be approachable). How do you feel about divided (as another possibility)? Then divides = flip divided, the -By can be implicit which reserves explicit dividedBy to match the Haskell colloquialism used in eg sortBy etc.

2 Likes

Sadly, this was a beautiful idea cut down by Haddock before it had a chance to take root. The idea is to go all in on qualified imports. If Algebra.Field exports a class representing fields, what are you gonna call it? Algebra.Field.Field? If you’re always gonna import qualified anyway, there’s no ambiguity in just calling it Algebra.Field.C!

But then Haddock strips all qualifications and you’re left with nonsense when reading docs. :melting_face:

(The idea might have never worked anyway since it forces users to use qualified imports, but I can imagine a world where that became the norm.)

7 Likes

I also have a pie-in-the-sky shopping list of algebraic classes. I think you have finessed yours in a lot of ways that I haven’t, but there might be things in there that you can borrow.

Some thoughts about your design:

  • I would strongly encourage you to name the classes to match existing Haskell patterns, to minimise the disruption to existing practitioners, and to prevent newcomers from getting stuck in your own little custom universe. This heuristic leads me to suggest FromInteger over ExpressibleByIntegerLiteral, for example. I’d also be careful how you name the ConvertibleTo classes because unlike the ExpressibleBy/From classes, you don’t actually have a source file containing a literal that’s being parsed, and e.g. both Integer and Rational can generate values with arbitrarily large textual representations.
  • I’d extend this heuristic to using the real mathematical names for things, which I think is fine if you give them really good haddocks and thorough motivations and examples. Package acts is a good example of this, but we can go even further. Although I wasn’t there when it was defined, I consider the historical weirdness of Num a result of not shopping from the algebraic catalogue.
  • I can see why you’d want to put each arithmetic operation into its own class, but if you’re going to do that I’d consider putting the traditional operators in there.
  • I have baked in the assumption that my operations are all closed, whereas you have chosen to define associated type families for the result types. I recommend checking whether type inference is still good enough in your design. Not being able to reason from the type of a result of basic algebraic operation back through to the argument types could force a lot of manual type signatures in intermediate lets in complex numeric code, ballooning contexts in your type signatures, or other syntactic noise.
  • I think you’ve done a lot more thinking about where the miscellaneous numeric functions need to go, which I’ve glossed over in my post. Good.
  • Both of our sketches have a real issue with law-only typeclasses. They’re dangerous for inference (your code infers a signature that doesn’t include a class whose law you rely on) and AFAIK nobody has a thorough solution yet.
  • With these large class hierarchies, there’s also a real problem asking people to implement/derive the entire hierarchy for their types. Again, I don’t think anyone’s got a full answer yet, especially when one wants to add a new class to the middle of the hierarchy.
  • You might need equivalents to stimes and mtimes which can do repeated squaring for efficiency.
5 Likes

I like a lot of the ideas here. But I’m also a bit worried for people reading complex code using this exact design because I’ve done a similar API before and it caused some issues.

The big pain point is that you have some lawless classes purely for operator overloading, and then subclasses which add laws. This can make reading code incredibly annoying guesswork, because you have to do class resolution in your head to figure out which laws apply. At least that happens to me; maybe others are smart enough to manage. When writing code I regularly forget to add the right bounds, which causes subtle bugs, the language server can’t manage constraints automatically, and especially verbose / self-descriptive class names can become quite painful when typed over and over.

You also already mentioned that lack of type inference can be painful, but I think the bigger pain is again during code reading. You gotta run type inference in your head, and now that’s also context dependent on the type class constraints in scope.

At a minimum I’d provide homogeneous variants of the operators. When starting with a very loosely typed API I usually end up using tightly typed wrappers 90+% of the time, and when the freely typed powerful version gets used, that’s a warning signal to code readers that more complex typing is happening.
Also, maybe have bidirectional constraints? E.g. for a + b == c, you currently have a b → c, but adding a c → b and b c → a as required fundeps can help with reading and inference.
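
That suggestion would look something like this sketch (a hypothetical fundep formulation, not the library’s actual class):

```haskell
{-# LANGUAGE FunctionalDependencies #-}

-- A hypothetical fundep formulation of addition: with all three arrows,
-- knowing any two of a, b, c determines the third, which lets inference
-- flow both forwards and backwards through an expression.
class Addition a b c | a b -> c, a c -> b, b c -> a where
    plus :: a -> b -> c

instance Addition Double Double Double where
    plus = (+)
```

The trade-off, as noted in the reply below the original post, is that those arrows have to be propagated through every subclass.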

2 Likes

I have had to convalesce for a few days (expending energy on personal matters, nothing to worry about) so I have not been able to respond to the recent comments yet, but in my absence I don’t want everyone to go down a rabbit hole of worrying - so I will say this, to give you some idea as to my thought process:

  • Keep the feedback coming - this is a sneak peek, so these concerns are precisely what I want to hear; after all, this thread is a negotiation of the API contract with its prospective userbase, no? I will still be making my own decisions, but I want them to be informed and deliberate ones
  • It is easy to narrow a heterogeneous interface by restricting it, but difficult to expand a homogeneous one in the other direction, so for this reason I think the heterogeneous foundation is necessary
    • There is a suite of homogeneous operators, which I think some people may have missed, as I only mentioned it in text and did not paste the code
  • Fundeps require propagating those arrows through subclasses which becomes rather untenable at scale - I don’t want to end up with typeclasses with 10 type parameters, I think that’d be worse :grimacing:
  • Leaky laws are definitely a prime concern - I am glad to know this, because I also have some law-typeclass / constraint-related material (well, more like 3 different approaches that I am still toying with) that I didn’t want distracting from what I already had, but it sounds as though there is interest, so I shall make sure to include more details on this next time
  • Regarding the requirement of implementing a large hierarchy, that has been driving me towards design changes that alleviate the issue - a more thorough breakdown of the root classes and functions seems effective, along with appropriate choices of default implementations. Here the heterogeneous / homogeneous separation has actually been incredibly valuable

Something worth noting on its own: I have found a large amount of success with a very specific design pattern that solves a very specific problem:

  • What do we do about mutually definable type classes - eg, the typeclass equivalent of {-# MINIMAL ... #-}?

Math is filled with them - for example, lattices vs boolean algebra vs logic: it is the exact same 5 functions in 3 different domains, and any one implements the other two. Anyway, that is the problem, and the solution has been to leave the classes completely distinct but to choose an appropriate default definition from one of the other typeclasses, such that you still need e.g. all 3 typeclass instances, but only one has a definition and the other two are empty because we get them for free. The result in practice is that implementing the large hierarchy requires very little actual implementation, because the chosen defaults propagate nicely.
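
The pattern looks roughly like this sketch - a hypothetical two-class reduction of the lattice / logic example, using DefaultSignatures to supply each side’s default from the other:

```haskell
{-# LANGUAGE DefaultSignatures #-}

-- Two views of the same operation, kept as distinct classes,
-- each with a default defined in terms of the other:
class JoinSemilattice a where
    join :: a -> a -> a
    default join :: Disjunction a => a -> a -> a
    join = disj

class Disjunction a where
    disj :: a -> a -> a
    default disj :: JoinSemilattice a => a -> a -> a
    disj = join

-- Implement one side...
instance Disjunction Bool where
    disj = (||)

-- ...and the other instance body is empty: the default propagates.
instance JoinSemilattice Bool
```

The one thing to watch is that an instance must break the cycle by defining at least one side, or both methods loop forever.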

However, type errors can be quite horrible, and specifically:

It does, and I absolutely agree; this library makes heavy use of AllowAmbiguousTypes and UndecidableInstances (safely, though!), which means chaining ambiguous things together requires type annotation. But I think that is simply the cost - it derives from the same reasons mathematicians define things the way they do. Annoyingly, for higher-level concepts like grades and blades and wedges, type annotation is not optional (e.g. dimensionally-annotated types and functions such as grade @1 @1), and many critical operations are heterogeneous by nature (e.g. wedge :: v -> v -> Bivector v) or downright polymorphic (because also wedge :: v -> Bivector v -> Trivector v). That sort of makes the argument moot, because the alternative would require manually implementing an infinite number of classes instead of a single wedge :: Grade i v -> Grade j v -> Grade (i + j) v (and I’m being nice by not bringing up multivectors - would you like even more definitions of wedge?)

Luckily, as mentioned before, it is easier to ‘seal’ a heterogeneous function with a subclass that constrains it, and, if using the homogeneous functions or operators, it only requires one annotation to propagate. So I think the solution is: I keep the heterogeneous foundation, but do more work to promote the homogeneous operators in the contexts where they apply. If I can do the messy grunt work of linking together heterogeneous ops, then only implementers need deal with the rough stuff, leaving a clean surface interface for the users.


So! Lawlessness and type inference are bumped up in priority. Keep the feedback coming!

2 Likes

I guess I’m not entirely convinced by your argument that you need a heterogeneous foundation? Many of the mathematical structures we work with are homogeneous, with a few exceptions, and we usually describe non-homogeneous operations as actions! I think numhask does it quite nicely: Additive is homogeneous, and for the non-homogeneous case you have AdditiveAction.

To give an example, we work a lot with typed unit values using the units package, which are ultimately newtypes over Double. A typed quantity like Length is additive, but has a MultiplicativeAction with Double, so you can do mm 2 *. 4 and so on. And in the same way we can use linear and give vectors actions over Double and so on. It works pretty well in practice.

I’d be interested to see an example of where you need non-homogeneity that isn’t cleanly describable as an action?


I am not saying that we can’t benefit from describing things as acts; maybe that is even how we end up ‘sealing’ some things away. But the thing I said about parallel concepts applies to how we handle this problem too - I think perhaps we can quite easily overlay said classes over the heterogeneous definition, and that they are not necessarily mutually exclusive.

Wedge product, geometric product, really anything involving projecting across dimensions. It’s not even that it is heterogeneous; it’s that they become (I don’t know how to put this formally) polymorphically heterogeneous and homogeneous at the same time. What does that even mean? I mean vectors are blades are grades are multivectors, and I can wedge any of them together, because they are hypothetically subsets of the same construct; but the homogeneous representation (multivectors) requires 2^n components, most of which will be empty and useless, infrequently used, and slow when used. Also, despite being ‘the same thing’, they are each also their own thing with distinct properties and uses (vectors, volumes, rotors, and motors are all quite different things, even if they are all ‘representable’ as a multivector).

Heterogeneous types actually let me use type families to differentiate and automatically dispatch to a fast implementation - so when I have 10 dimensions in double conformal space, I don’t need to lift things to 2^10 = 1024-component multivectors. Using acts doesn’t solve this, I think - I’d need to define an act instance for every combination of every dimension of blade and grade and multivector. So, acts are orthogonal to some aspects of the problems I am trying to solve, while heterogeneity is not!

  • I was surprised by this too; I had actually been trying to improve Num for years with no meaningful movement forward - even using acts (I used to be so concerned with that very separation). Heterogeneity let me move forward.

So, describing them as acts is a complication at the moment and the heterogeneous math is lubrication - at least for me, and as I am the one writing the library, I want to see where it leads.


Consider that an act is a specialized heterogeneous operation, not unlike how a homogeneous operation is a specialization. The benefit that you seek may come not from the specialization being an act specifically, but from the specialization itself. With this philosophy, I see no reason acts shouldn’t be included if homogenized specializations are; therefore my goal should be to ensure that they have appropriate separation so as not to inhibit any other classes, but are well-integrated and included enough to provide an optional benefit.


Also, the trivial example is wedging simple vectors: wedge :: v -> v -> Bivector v - not an act. Not a homogeneous relation either. Is this a sufficient example?

Addendum: Going further: because wedging increases the grade, wedging is never an act, except with scalars. Since wedge :: Grade i v -> Grade j v -> Grade (i + j) v, wedge is only an act when one of the grades is 0, aka it’s a Scalar v, which is when wedging reduces down to / becomes precisely equivalent to standard (Grade k) vector-scalar multiplication.
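To make that concrete, here is a toy 3D model of the graded signature above. The representations and names are my illustrative sketch, not Bolt’s actual API: each grade gets its own compact data instance rather than a full 2^3-component multivector, and the result grade of a wedge is computed at the type level:

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, MultiParamTypeClasses, FlexibleInstances #-}
import GHC.TypeNats (Nat, type (+))

-- One compact representation per grade (in 3D), instead of 2^3 = 8 components.
data family Grade (k :: Nat)
data instance Grade 1 = V Double Double Double deriving (Eq, Show) -- vector
data instance Grade 2 = B Double Double Double deriving (Eq, Show) -- bivector: e23, e31, e12

-- The result grade i + j is computed by type-level arithmetic.
class Wedge (i :: Nat) (j :: Nat) where
  wedge :: Grade i -> Grade j -> Grade (i + j)

-- Grade-1 wedge grade-1 in 3D: the familiar cross-product components.
instance Wedge 1 1 where
  wedge (V ax ay az) (V bx by bz) =
    B (ay*bz - az*by) (az*bx - ax*bz) (ax*by - ay*bx)
```

Here `wedge (V 1 0 0) (V 0 1 0)` produces a `Grade 2` bivector, with the result type arising automatically from the index arithmetic rather than from a separate class per grade pairing.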

It seems to me that you might be able to get the flexibility that you need and preserve type inference in the common case, if you use the operators as constrained aliases for the heterogeneous word form:

import Data.Kind (Type)

class Addition a b where
  type Sum a b :: Type
  plus :: a -> b -> Sum a b

-- | Constrained version of 'plus', for arithmetic convenience.
(+) :: (Addition a a, Sum a a ~ a) => a -> a -> a
(+) = plus
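For what it’s worth, here is a self-contained sketch showing that the constrained alias does recover ordinary inference at homogeneous use sites; the `Int` instance is hypothetical, added only to give the alias something to resolve to:

```haskell
{-# LANGUAGE TypeFamilies, MultiParamTypeClasses, FlexibleInstances #-}
import Prelude hiding ((+))
import qualified Prelude
import Data.Kind (Type)

-- The heterogeneous class, as in the snippet above.
class Addition a b where
  type Sum a b :: Type
  plus :: a -> b -> Sum a b

-- Constrained alias: homogeneous use sites need no annotation.
(+) :: (Addition a a, Sum a a ~ a) => a -> a -> a
(+) = plus

-- Hypothetical instance, purely for demonstration.
instance Addition Int Int where
  type Sum Int Int = Int
  plus = (Prelude.+)

four :: Int
four = 2 + 2  -- infers exactly like Prelude's (+)
```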

Great minds think alike - that is precisely what I did for the alternative homogeneous operators that I mentioned but didn’t post. I might as well post them, since you all really want to make sure they exist :slight_smile:

module Bolt.Math.Syntax.Operators.Simple where

import Bolt.Math.Internal.Prelude

import Bolt.Math.Addition
import Bolt.Math.Multiplication
import Bolt.Math.Exponentiation

infixr 8  ^
infixl 7  *, /
infixl 6  +, -

(+) :: (Addition a a, Sum a a ~ a) => a -> a -> a
a + b = plus a b

(-) :: (Subtraction a a, Delta a a ~ a) => a -> a -> a
a - b = minus a b

(*) :: (Multiplication a a, Product a a ~ a) => a -> a -> a
a * b = times a b

(/) :: (Division a a, Ratio a a ~ a) => a -> a -> a
a / b = divides a b

(^) :: (Exponentiation a a, Power a a ~ a) => a -> a -> a
a ^ b = pow a b

Obviously mine are eta-expanded, but that’s a matter of preference. I will also probably be adopting specialized numhask-style left and right act operators, because I have decided that I like them, though they will get their own particular operator module in Bolt.Math.Syntax.Operators.Act or something.
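For the curious, a sketch of what specialized act operators might look like layered over the heterogeneous class; the operator spellings, class names, and instances here are my guesses for illustration, not numhask’s or Bolt’s actual definitions:

```haskell
{-# LANGUAGE TypeFamilies, MultiParamTypeClasses, FlexibleInstances #-}
import Data.Kind (Type)

-- The heterogeneous multiplication foundation, as sketched upthread.
class Multiplication a b where
  type Product a b :: Type
  times :: a -> b -> Product a b

infixl 7 .*, *.

-- Left and right actions: the acted-on type is preserved in the result.
(.*) :: (Multiplication s a, Product s a ~ a) => s -> a -> a
(.*) = times

(*.) :: (Multiplication a s, Product a s ~ a) => a -> s -> a
(*.) = times

-- Toy instances: a Double acting on a pair from either side.
instance Multiplication Double (Double, Double) where
  type Product Double (Double, Double) = (Double, Double)
  times s (x, y) = (s * x, s * y)

instance Multiplication (Double, Double) Double where
  type Product (Double, Double) Double = (Double, Double)
  times (x, y) s = (x * s, y * s)

-- Note: the scalar's type still needs pinning down here, echoing the
-- annotation cost discussed upthread.
scaled :: (Double, Double)
scaled = (2 :: Double) .* (3, 4)
```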


I cut my Haskell baby teeth on that library; it was a wealth of archeological knowledge. It had a History monad that became my perf library and really should be a better-known pattern for convergence and anything loopy, and numhask & harpie are spiritual descendants of the patterns in there.

It was too disconnected from everything else to be practical.


Yes, I went looking in numhask and you might be right.

QuotientField seems to work very well in an action context, but the other heterogeneous bits also look suspiciously like actions in disguise.

I’m not claiming that all mathematical operators are either homogeneous or correspond to an action, that would be a big claim! That is, surely many operators have result types that correspond to neither of the input types.

Perhaps I would question whether you need an abstraction that covers both addition on integers and the wedge product on vectors? Is the goal just to have a great degree of notational flexibility, so you can use + for the wedge product if you really want?
