I’m experimenting with a way to understand gradient operators in a purely functional setting, and I’m curious how people in the Haskell community think about this direction.
My current viewpoint is that gradients naturally live in the cotangent space as covectors, but I’d like to push the idea further and study gradients as functorial constructions. Haskell, with its purity and algebraic expressiveness, feels like an ideal place to begin experimenting with this perspective. The goal is to treat differentiation as a transformation of algebraic structures, and to explore whether categorical tools can give a clean and provable abstraction of automatic differentiation (AD).
Before diving too deep, I’d love to hear thoughts from people who’ve worked in Haskell. Are there prior projects, libraries, or theoretical frameworks in this direction that I should look at?
Any opinions or pointers would be greatly appreciated.
You might be interested in Conal Elliott’s work on the subject (if you haven’t found it already), for example “The Simple Essence of Automatic Differentiation”. This work takes the approach of calculating correct-by-construction algorithms for forward- and reverse-mode AD from functoriality constraints.
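A flavour of that paper’s central construction, condensed into a sketch: pair each function with its derivative at every point, and let composition be the chain rule. The `Category` laws then fall out of the functoriality constraint. (Representing the linear map as a bare function here is a simplification of the paper’s setup; linearity is a law, not something the compiler checks.)

```haskell
import Prelude hiding (id, (.))
import Control.Category

-- A differentiable function from a to b, paired with its derivative
-- (a linear map a -> b) at each point.
newtype D a b = D { runD :: a -> (b, a -> b) }

-- Composition is exactly the chain rule.
instance Category D where
  id = D (\a -> (a, id))
  D g . D f = D (\a ->
    let (b, f') = f a
        (c, g') = g b
    in  (c, g' . f'))

-- Example: squaring, whose derivative at x is the linear map (2*x *).
square :: D Double Double
square = D (\x -> (x * x, \dx -> 2 * x * dx))
```

For instance, `runD (square . square) 2` yields the value `16` together with the derivative of `x^4` at `2`, namely the linear map `(32 *)` — all assembled by the chain rule in the `Category` instance.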
The interpretation of differentiation taught in introductory analysis is:
The derivative of a function f assigns to each point x0 a linear map f' x0 such that f is approximated near x0 by the affine function
\x -> f x0 + f' x0 (x - x0)
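To make that concrete, here is a small sketch with f x = x * x at the (arbitrarily chosen) point x0 = 3, where the linear map f' x0 is applied to the displacement x - x0:

```haskell
-- f and its derivative, viewed as a linear map at each point.
f :: Double -> Double
f x = x * x

f' :: Double -> (Double -> Double)
f' x0 = \dx -> 2 * x0 * dx

-- The local affine approximation of f at x0.
affine :: Double -> Double -> Double
affine x0 x = f x0 + f' x0 (x - x0)
```

Near x0 the error is second order in the displacement: f 3.1 = 9.61, while affine 3 3.1 is approximately 9.6.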
The problem is that Haskell’s type system cannot express the algebraic linearity of a function, at least not between vector spaces that are not free. The same applies to data representations of non-free cotangent spaces. You can, however, define a suitable type class and attach laws to it, which implementors should honor.
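A minimal sketch of that suggestion (the class and names here are illustrative, not from any particular library): linearity is stated as laws in a comment, and each instance promises to satisfy them.

```haskell
-- An illustrative vector-space class.
class VectorSpace v where
  zeroV  :: v
  addV   :: v -> v -> v
  scaleV :: Double -> v -> v

instance VectorSpace Double where
  zeroV  = 0
  addV   = (+)
  scaleV = (*)

-- A linear map from u to v. Implementors must honor the laws
--   applyL m (addV u v)   == addV (applyL m u) (applyL m v)
--   applyL m (scaleV c v) == scaleV c (applyL m v)
-- which the compiler cannot check.
newtype LinMap u v = LinMap { applyL :: u -> v }

-- A lawful example: scaling by a constant.
scaleMap :: Double -> LinMap Double Double
scaleMap c = LinMap (c *)
```

Note that GHC’s LinearTypes extension tracks linearity in the resource sense (using an argument exactly once), which is a different notion from the vector-space linearity needed here, so a law-carrying class remains the natural encoding.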
All this applies when all you know about f is that it is differentiable. Once you have a symbolic representation of f as an algebraic structure, more is possible; see the posts above.
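As a small illustration of what a symbolic representation buys you (a deliberately minimal sketch, one variable only): differentiation becomes an ordinary total function on the syntax, computed by the usual sum and product rules.

```haskell
-- A minimal expression language in one variable.
data Expr = Var | Const Double | Add Expr Expr | Mul Expr Expr

-- Symbolic differentiation: sum rule and product rule on the syntax.
diff :: Expr -> Expr
diff Var       = Const 1
diff (Const _) = Const 0
diff (Add u v) = Add (diff u) (diff v)
diff (Mul u v) = Add (Mul (diff u) v) (Mul u (diff v))

-- Evaluate an expression at a point.
eval :: Double -> Expr -> Double
eval x Var       = x
eval _ (Const c) = c
eval x (Add u v) = eval x u + eval x v
eval x (Mul u v) = eval x u * eval x v
```

For example, `eval 3 (diff (Mul Var Var))` computes the derivative of x * x at 3, giving 6.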