If I define a type, how can I make it compatible with all functions that accept Num?
Suppose I have this:
data MyNum a = (Num a) => MyNum !a !a
If I write this:
instance Num a => Num (MyNum a) where
  (MyNum x1 y1) + (MyNum x2 y2) = MyNum (x1 + x2) (y1 + y2)
  (MyNum x1 y1) - (MyNum x2 y2) = MyNum (x1 - x2) (y1 - y2)
  (MyNum x1 y1) * (MyNum x2 y2) = MyNum (x1 * x2) (y1 * y2)
  negate (MyNum x y) = MyNum (negate x) (negate y)
  abs (MyNum x y) = MyNum (abs x) (abs y)
  signum (MyNum x y) = MyNum (signum x) (signum y)
  fromInteger n = MyNum (fromInteger n) (fromInteger n)
this doesn’t work:
exp (MyNum 1.0 1.0)
Why? Isn't exp calculated with a Taylor series? Shouldn't (+) and (*) be enough?
Also: how can I make it so that when I sum a MyNum with a plain number, the sum acts only on the first component of MyNum?
exp is a method of the Floating class, not something derived from (+) and (*), so you also need to give MyNum a Floating instance (which in turn requires a Fractional instance).
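A minimal, self-contained sketch of the missing instances, assuming you want every operation lifted component-wise (Floating needs Fractional, which needs Num):

```haskell
data MyNum a = MyNum !a !a
  deriving (Eq, Show)

-- Lift unary and binary operations component-wise.
lift1 :: (a -> a) -> MyNum a -> MyNum a
lift1 f (MyNum x y) = MyNum (f x) (f y)

lift2 :: (a -> a -> a) -> MyNum a -> MyNum a -> MyNum a
lift2 f (MyNum x1 y1) (MyNum x2 y2) = MyNum (f x1 x2) (f y1 y2)

instance Num a => Num (MyNum a) where
  (+) = lift2 (+)
  (-) = lift2 (-)
  (*) = lift2 (*)
  negate = lift1 negate
  abs = lift1 abs
  signum = lift1 signum
  fromInteger n = MyNum (fromInteger n) (fromInteger n)

instance Fractional a => Fractional (MyNum a) where
  (/) = lift2 (/)
  recip = lift1 recip
  fromRational r = MyNum (fromRational r) (fromRational r)

instance Floating a => Floating (MyNum a) where
  pi    = MyNum pi pi
  exp   = lift1 exp
  log   = lift1 log
  sin   = lift1 sin
  cos   = lift1 cos
  asin  = lift1 asin
  acos  = lift1 acos
  atan  = lift1 atan
  sinh  = lift1 sinh
  cosh  = lift1 cosh
  asinh = lift1 asinh
  acosh = lift1 acosh
  atanh = lift1 atanh
```

With those instances in scope, exp (MyNum 1.0 1.0) typechecks and applies exp to each component.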
Yes, you can make (+) act only on the first component, but you'd have to be consistent across every other operation too: (*), fromInteger, etc. would all have to treat the first component specially.
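One way to read "summing with a plain number touches only the first component" (an assumption about the intended semantics): keep (+) component-wise, but make fromInteger inject literals into the first slot only, with 0 in the second. A sketch:

```haskell
data MyNum a = MyNum !a !a
  deriving (Eq, Show)

instance Num a => Num (MyNum a) where
  MyNum x1 y1 + MyNum x2 y2 = MyNum (x1 + x2) (y1 + y2)
  MyNum x1 y1 * MyNum x2 y2 = MyNum (x1 * x2) (y1 * y2)
  negate (MyNum x y) = MyNum (negate x) (negate y)
  abs (MyNum x y) = MyNum (abs x) (abs y)
  signum (MyNum x y) = MyNum (signum x) (signum y)
  -- Numeric literals become (n, 0), so `v + 3` only shifts
  -- the first component: MyNum 1 2 + 3 == MyNum 4 2.
  fromInteger n = MyNum (fromInteger n) 0
```

This is where the consistency problem bites: with the same fromInteger, `v * 3` multiplies by MyNum 3 0 and zeroes the second component, which is probably not what you want.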
If you're trying to build a vector-like type, check out the V2 type from the linear library, which already provides all these instances.
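To see why V2 fits: linear lifts the numeric classes component-wise through V2's Applicative instance. A simplified, self-contained stand-in for that pattern (not the library's actual code, which covers many more classes):

```haskell
{-# LANGUAGE DeriveFunctor #-}
import Control.Applicative (liftA2)

-- Simplified stand-in for linear's V2.
data V2 a = V2 a a
  deriving (Eq, Show, Functor)

instance Applicative V2 where
  pure x = V2 x x
  V2 f g <*> V2 x y = V2 (f x) (g y)

-- Every Num method is either zipped (liftA2), mapped (fmap),
-- or broadcast (pure), so the instance is one line per method.
instance Num a => Num (V2 a) where
  (+) = liftA2 (+)
  (-) = liftA2 (-)
  (*) = liftA2 (*)
  negate = fmap negate
  abs = fmap abs
  signum = fmap signum
  fromInteger = pure . fromInteger
```

Writing the instances against Applicative rather than pattern matching on the constructor is what lets linear reuse the same definitions for V2, V3, V4, and so on.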