A closed definition is somewhat tricky to write down precisely, because `rseq` has to look at the whole expression. In prose, the definition is

```
rseq d
= () if d consists only of (possibly infinitely many) applications of C
     and only finitely many cost annotations are not 0
= ⊥  else
```
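
As a sanity check, here is a small executable sketch (my own modelling, not from the thread): elements of `D` become a lazy Haskell type, and a fuel-bounded approximation of `rseq` can refute (by hitting `⊥` within the fuel) but can never confirm the success case in finitely many steps, which already hints at the trouble with continuity.

```haskell
-- Sketch only, my own modelling of D: an element is either ⊥ or a
-- cost-annotated application of the constructor C.
data D = Bot | C Int D

-- Approximate result: definitely ⊥, or "no ⊥ found so far", carrying
-- the cost accumulated within the inspected prefix.
data Approx = IsBot | Unknown Int deriving (Eq, Show)

-- Fuel-bounded approximation of rseq: inspecting only k layers we can
-- refute (find Bot) but never confirm the infinite-tower success case.
rseqApprox :: Int -> D -> Approx
rseqApprox _ Bot     = IsBot
rseqApprox 0 _       = Unknown 0
rseqApprox k (C c x) = case rseqApprox (k - 1) x of
  IsBot     -> IsBot
  Unknown n -> Unknown (c + n)

-- An infinite tower of C's with only finitely many non-zero costs:
dGood :: D
dGood = C 2 (C 3 zeros) where zeros = C 0 zeros

-- A tower that bottoms out:
dBad :: D
dBad = C 1 (C 1 Bot)
```

Here `rseqApprox 10 dBad` finds the `⊥` and returns `IsBot`, while `rseqApprox 10 dGood` only reports `Unknown 5`: it can accumulate the costs seen so far but no finite amount of fuel certifies success.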

It is monotonic, but not continuous. I think the counterexample is the one you give here: a sequence of values where `rseq` is bottom on all these elements, but `()` in the limit.

But if it isn’t continuous, I’m a bit doubtful it is of use.

That’s what I thought, but Peter Thiemann pointed out that all you lose is that the least fixed point can no longer be found as the ω-limit of iterating from bottom; it still exists (by some variant of Knaster–Tarski).

And you are right: one loses the simple fixed-point iteration as a proof principle. (Quick counterexample: `∀ d, rseq d = ⊥` should hold according to that proof principle, based on our examples, but of course it doesn’t.) Maybe there is an analogous proof principle based on transfinite induction. I’ll start worrying about proofs once I have written down the semantics (I have some promising ideas and might write them down today).
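
The failure of the simple iteration can be made concrete with another small sketch (again my own modelling, with `Maybe Int` standing in for the result domain: `Nothing` for `⊥`, `Just n` for `n ▹ ()`): every finite Kleene iterate of the defining functional maps the infinite zero-cost tower to `⊥`, even though the intended value there is `0 ▹ ()`.

```haskell
data D = Bot | C Int D

type R = Maybe Int  -- Nothing models ⊥, Just n models n ▹ ()

-- The functional whose fixed point rseq is supposed to be.
step :: (D -> R) -> (D -> R)
step _ Bot     = Nothing
step f (C c x) = fmap (c +) (f x)

-- Kleene iteration starting from the bottom function.
iterates :: [D -> R]
iterates = iterate step (const Nothing)

-- The infinite tower of zero-cost C's.
zeros :: D
zeros = C 0 zeros
```

Each `iterates !! k` unrolls only `k` layers before falling back to the bottom function, so `map ($ zeros) (take 20 iterates)` is a list of `Nothing`s; the ω-limit of the iterates is still `⊥` on `zeros`, while the intended `rseq zeros` is `Just 0`.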

How about this definition: `rseq` is the *largest* element of `D → D` such that

```
rseq (C (c ▹ x)) = c ▹ rseq x
rseq d           = ⊥, else
```

and also

```
rseq d ⊑ 0 ▹ ()
```

holds. Note that here `c₁ ▹ c₂ ▹ … = (c₁ + c₂) ▹ …`, i.e. the costs accumulate, so one gets `rseq d = n ▹ ()` if the costs add up to `n`, but `rseq d = ⊥` otherwise. The last equation ensures that we don’t get a value other than `()` in the success case.
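
The cost accumulation becomes executable if one cheats and represents the infinite tail of zero-cost `C`s syntactically. A sketch under that assumption (my own construction, not the real `rseq` on `D`):

```haskell
-- Sketch: the infinite zero-cost tail is its own constructor, which
-- makes this fragment of rseq computable.
data D = Bot | C Int D | ZerosForever

rseq :: D -> Maybe Int  -- Just n models n ▹ (), Nothing models ⊥
rseq Bot          = Nothing
rseq ZerosForever = Just 0
rseq (C c x)      = fmap (c +) (rseq x)  -- c₁ ▹ c₂ ▹ … = (c₁ + c₂) ▹ …
```

For example `rseq (C 2 (C 3 ZerosForever))` accumulates to `Just 5`, i.e. `5 ▹ ()`, while `rseq (C 1 Bot)` propagates the bottom and gives `Nothing`.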

That we are using a largest fixed point here looks rather promising; this does have a coinductive feel to it, after all.