Hmm. It is a bit unclear to me what exactly you mean. Perhaps let me try to provide a non-answer with an explanation of how I think Liquid Haskell could work with guarded recursion.

First off, you can simply postulate the (singly-clocked) Later modality as

```
{-# LANGUAGE DerivingVia #-}
import Data.Functor.Identity (Identity (..))

newtype Later a = MkLater a deriving (Functor, Applicative) via Identity
```

Then export `Later`, but not `MkLater`. I think this is the “unsafe” runtime system module that you are referring to. (You likely need more primitives besides `Applicative` that can be proven safe in the underlying topos model.)

Then you define the general guarded recursion fixpoint combinator as follows (note that it is *not* primitive!):

```
gfix :: (Later a -> a) -> a
gfix f = f (pure (gfix f)) -- NB: this is safe
```
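To make that concrete, here is a sketch of the kind of non-well-founded structure `gfix` produces (the names `Str`, `ones` and `takeS` are mine). Note that observing the stream requires unwrapping `MkLater`, which is exactly the kind of thing only the trusted module may do:

```
{-# LANGUAGE DerivingVia #-}
import Data.Functor.Identity (Identity (..))

newtype Later a = MkLater a deriving (Functor, Applicative) via Identity

gfix :: (Later a -> a) -> a
gfix f = f (pure (gfix f))

-- guarded streams: the tail is only available "later"
data Str a = Cons a (Later (Str a))

-- the infinite stream 1,1,1,... -- productive, because the recursive
-- occurrence is guarded by Later
ones :: Str Int
ones = gfix (Cons 1)

-- observe finitely many elements; this unwraps MkLater, so it belongs
-- inside the trusted module
takeS :: Int -> Str a -> [a]
takeS n _ | n <= 0 = []
takeS n (Cons x (MkLater xs)) = x : takeS (n - 1) xs
```

Liquid Haskell's termination checker has no reason to accept `ones` as written, which is precisely the worry below.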

However, `gfix` enables the definition of infinite, non-well-founded structures. I have my doubts that Liquid Haskell’s termination checker would be happy about this if it keeps unfolding `MkLater`. This paper indicates that it currently is not happy, and suggests step-indexed induction as a model of coinduction. I think that solves two out of three problems that are best considered separately:

1. Soundness in the presence of non-well-founded codata
2. A way to do coinductive proofs by integrating productivity checks into the (refined) language
3. A way to do coinductive proofs by doing a step-indexed proof in an underlying step-indexed inductive model of coinduction

As far as I can tell, the paper solves (1) (don’t pass `--no-adt`) and (3) (I wonder how this is compatible with `--adt`?? They do not seem to step-index the data types!), but not (2). Instead of providing a productivity checker, it relies on the user to do step-indexing. That’s well and good for simple first-order types such as streams with a simple encoding of observational equality, but is unlikely to work well for higher-order, negatively recursive types such as `Value`.

**Problem 1**: What does it take to make LH sound wrt. the Later modality?

I think it suffices to prevent LH from deriving an induction principle for `Later` (so LH must know about it). That is, concluding `P a` from `P (Later a)` is **utterly** unsafe, and we need to hide this “implementation” from LH. One way to achieve this is to keep `Later` opaque and tell the SMT solver just about the applicative laws (such as `pure f <*> pure a = pure (f a)`), but not about how `pure` and `<*>` are implemented.
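A sketch of what that could look like. The `{-@ assume ... @-}` form is real LH syntax for taking an axiom on trust; whether this exact reflection of the combinators goes through is my speculation, and the names `next`, `ap'` and `lawHom` are mine:

```
-- Sketch only: in a real setup this lives in a trusted module whose export
-- list hides MkLater, e.g. `module Later (Later, next, ap', lawHom) where`.
newtype Later a = MkLater a deriving (Eq, Show) -- Eq/Show only for testing here

-- `pure` under a name we control, so LH sees it as an opaque function
next :: a -> Later a
next = MkLater

-- `<*>` likewise
ap' :: Later (a -> b) -> Later a -> Later b
ap' (MkLater f) (MkLater a) = MkLater (f a)

-- Tell LH (and hence the SMT solver) only the equational theory, not the
-- implementation: the homomorphism law as an assumed axiom.
{-@ assume lawHom :: f:(a -> b) -> x:a
                  -> { ap' (next f) (next x) == next (f x) } @-}
lawHom :: (a -> b) -> a -> ()
lawHom _ _ = ()
```

The point is that the solver may *use* `lawHom`, but it never learns that `Later a` is representationally just `a`.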

To understand the qualitative difference between Problem 2 and Problem 3, let me give an analogy.

I think we agree that programming in C is more convenient than programming in assembler. Among other things, the C compiler ensures a proper stack and register discipline (you don’t need to “prove” that your program adheres to this discipline).

But sometimes writing C is “not enough” and you really need to write assembler, for example when writing an operating system kernel or a driver, because the C language does not offer the necessary low-level, architecture-specific abstractions.

By analogy, proving in a guarded type system is more convenient than proving in the underlying (step-indexed or Scott) topos model directly. You can simply forget about the mathematical details of the underlying model and live in your axiomatised world (“each function call gets fresh local variable slots”).

This argument is similar to the blog post’s argument that reasoning on discretely-ordered total elements is more convenient than on partial ones.

Sometimes (rather frequently, actually), staying within the applicative functor interface/DSL/internal language of the Later modality is insufficient. (I think this is an example.) In this case, you really need to unpack the underlying model and attempt a step-indexed proof for the primitive that you need (called `flip-tick` in my example).

So, problems (2) and (3) are about writing convenient and useful tools to write coinductive proofs. (2) is more about high-level convenience and (3) is about the “assembler to make it happen” if (2) is insufficient.

**Problem 2**: How do we write convenient LH proofs involving Later?

I think we would need to teach LH about *Löb induction*, which is `gfix` generalised to `a :: Prop`.
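Concretely, Löb induction has exactly the shape of `gfix` read at a proposition type. A sketch (reusing the definitions from above; making `p` a *refined* LH proposition rather than an ordinary type is the part that needs new support):

```
{-# LANGUAGE DerivingVia #-}
import Data.Functor.Identity (Identity (..))

newtype Later a = MkLater a deriving (Functor, Applicative) via Identity

gfix :: (Later a -> a) -> a
gfix f = f (pure (gfix f))

-- Löb induction: to prove p, you may assume a proof of p that is only
-- usable "later". At proposition types this is the coinduction principle;
-- as a Haskell term it is literally gfix.
lob :: (Later p -> p) -> p
lob = gfix
```

Of course, at ordinary Haskell types this is degenerate; the interesting instances are the ones where `p` is a (refined) equality or bisimilarity statement.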

That would probably allow you to do all kinds of proofs about programs in the Applicative fragment. (I believe the main benefit of *Ticked* Cubical Type Theory is that the Applicative reasoning becomes part of definitional equality, so you don’t need to do proofs involving Applicative laws.)

This would not be too much work, I think. At this point, LH would be on par with the (admittedly still infant and mediocre, if I may say so) treatment of guarded types in Agda, Rocq or Lean, where you axiomatise all combinators and equational theory.

If you want more combinators than just Applicative (and you likely will), then you need more axioms that describe how these combinators can be used safely. I think you can do this by introducing new uninterpreted functions for the combinators and `assume` axioms about their equational theory.

For simple kinds of combinators, this might work well, because the axioms are “obviously true”.

However, every axiom is a source of unsoundness unless you have proved it.

**Problem 3** is about providing users a way to prove their axioms correct in an underlying topos model.

(Semanticists like Jon Sterling tend to do these proofs on paper. I believe that is for lack of a good solution to this problem in today’s proof assistants.)

In order for that to work, LH would need to reflect the Haskell definition of the combinator and the definition of the axiom into this topos model (via a denotational semantics, from Haskell into a suitable Haskell DSL backed by SMT logic) and let the user provide a proof there.
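To make this concrete, here is a toy rendering of the topos-of-trees model in Haskell (the encoding and all names are my own illustration, not a proposal for the actual DSL): a guarded value is its family of n-th approximations, ▷ shifts the step index, and the guarded fixpoint becomes ordinary well-founded recursion on the index.

```
-- Toy step-indexed model: a guarded value is its family of n-approximations.
newtype SI a = SI { approx :: Int -> a }

-- ▷ shifts the step index; at index 0 we must supply a default approximation
laterSI :: a -> SI a -> SI a
laterSI d (SI f) = SI (\n -> if n <= 0 then d else f (n - 1))

-- The guarded fixpoint is plain well-founded recursion on the step index:
-- the n-th approximation only ever consults approximations below n.
gfixSI :: a -> (SI a -> SI a) -> SI a
gfixSI d f = SI go
  where go n = approx (f (laterSI d (SI go))) n

-- Example: the n-th approximation of the stream of ones is a list of length n
onesApprox :: SI [Int]
onesApprox = gfixSI [] (\tl -> SI (\n -> take n (1 : approx tl n)))
```

Proving an axiom correct in this model then amounts to proving its statement for every index n, by induction on n, which is the kind of step-indexed proof the paper asks users to write by hand.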

Finding the right Haskell DSL here is the subject of synthetic guarded domain theory (= axiomatisation of an adequate topos), if I understand correctly; an active research area.