I have been reading a little bit about the so-called Laws of Form. Dan Piponi has a complicated post with Haskell code from several years ago.
My investigations are much more basic. I’ve got a reduction algorithm working that uses the fact that expressions in Laws of Form are like a Dyck language, so strings of balanced parentheses will work as a representation.
test1 = "(()())"
test2 = "((()))"
test3 = "()()"
-- Iterate f until the result stops changing. (Note: this shadows
-- Data.Function.fix, which computes a different kind of fixed point.)
fix :: Eq a => (a -> a) -> a -> a
fix f x = let x' = f x in if x == x' then x else fix f x'

reduce :: String -> String
reduce ('(':')':'(':')':rest) = "()" ++ reduce rest  -- condense ()() to ()
reduce ('(':'(':')':')':rest) = reduce rest          -- cancel (())
reduce (c:rest) = c : reduce rest
reduce [] = []

fr :: String -> String
fr = fix reduce
In other words, each pass deletes literal occurrences of (()) and rewrites literal occurrences of ()() to (), and this repeats until the string stops changing, at which point you have either “” or “()” left.
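To see those passes explicitly, here is a small self-contained sketch. The definitions above are inlined (with fix renamed to fixpoint so it doesn’t clash with Data.Function.fix), and traceReduce is my own helper name, not part of the original code:

```haskell
-- Inlined copies of the reducer above, so this snippet stands alone.
fixpoint :: Eq a => (a -> a) -> a -> a
fixpoint f x = let x' = f x in if x == x' then x else fixpoint f x'

reduce :: String -> String
reduce ('(':')':'(':')':rest) = "()" ++ reduce rest  -- condense ()() to ()
reduce ('(':'(':')':')':rest) = reduce rest          -- cancel (())
reduce (c:rest) = c : reduce rest
reduce [] = []

fr :: String -> String
fr = fixpoint reduce

-- Every intermediate string until the reduction stabilizes.
traceReduce :: String -> [String]
traceReduce s = let s' = reduce s
                in if s == s' then [s] else s : traceReduce s'

-- traceReduce "(()())" yields ["(()())","(())",""]
-- fr "((()))" and fr "()()" both yield "()"
```

Note that a single pass of reduce only rewrites literal four-character windows, which is why the outer fixed-point iteration is needed: "(()())" first becomes "(())" and only vanishes on the second pass.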
This seems to work fine, but what I’m wondering is how to define the same algorithm using an algebraic data type. I’ve tried things like
data Form = Empty | Mark [Form]
but writing nested values of that type gets confusing. and writing the reduce function is even more confusing. I wonder if there is a simple implementation that I am missing.
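For what it’s worth, here is one sketch of how an ADT version might go. It rests on an assumption of mine: since a form is just a sequence of marks, the empty list can play the role of Empty, so no separate Empty constructor is needed. The names Mark, simplify, cross, and condense are my own:

```haskell
-- One possible encoding: a form is a sequence of marks, and each mark
-- encloses a form. The empty list [] is the unmarked (void) state.
newtype Mark = Mark [Mark] deriving (Eq, Show)

-- Simplify bottom-up: reduce each mark's contents first, then apply the
-- two arithmetic initials at this level.
simplify :: [Mark] -> [Mark]
simplify = condense . concatMap cross
  where
    -- Cancellation: (()) vanishes. A mark whose simplified contents are
    -- a single empty mark reduces to nothing.
    cross (Mark ms) = case simplify ms of
      [Mark []] -> []
      ms'       -> [Mark ms']
    -- Condensation: ()() = (). After cross, every surviving mark at this
    -- level is empty, so a nonempty level collapses to one empty mark.
    condense [] = []
    condense _  = [Mark []]

-- test1 = "(()())":  simplify [Mark [Mark [], Mark []]]  is []
-- test2 = "((()))":  simplify [Mark [Mark [Mark []]]]    is [Mark []]
-- test3 = "()()":    simplify [Mark [], Mark []]         is [Mark []]
```

One pleasant consequence of this encoding is that no outer fix is needed: the recursion already simplifies each subtree completely before its parent is examined, so one bottom-up pass lands directly on either [] (void) or [Mark []] (the mark).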