Presentations progress for Hell

There’s a small comment here about progress in my research around adding a pretty printer for Hell. You might find the idea of an interactive lazy printer as intriguing as I do!


Cool. With a longer delay do you batch and redraw or drop events? And do you default to lazy evaluation or is there a specific depth budget at which laziness kicks in?

So the actual re-drawing happens automatically when you update the state in brick. It has a diffing algorithm for the TTY state. I could still batch changes, but don’t presently.

The laziness is universal, but I was thinking about adding optimistic eagerness for any slots that are atomic, like numbers, text, etc. Even then, those fields could be expensive to compute or might not terminate at all; in that case the system would handle it just fine and report the field as ongoing, or as cancelled (by e.g. a timeout or a user interrupt).
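
To make that concrete, here’s a rough sketch (illustrative names, not Hell’s actual internals) of how a single lazily evaluated slot could be forced on a worker thread and then reported as ongoing, failed, or done, using the async package:

import Control.Concurrent.Async (Async, async, cancel, poll)
import Control.Exception (SomeException, evaluate)

-- What the UI can report about one field’s evaluation.
data Slot a
  = Ongoing (Async a)     -- still evaluating (possibly forever)
  | Failed SomeException  -- threw, or was cancelled
  | Done a                -- evaluated to weak head normal form

-- Start forcing a possibly expensive (or non-terminating) thunk off the UI thread.
startSlot :: a -> IO (Async a)
startSlot thunk = async (evaluate thunk)

-- Ask, without blocking, what to display for this slot right now.
pollSlot :: Async a -> IO (Slot a)
pollSlot job = do
  r <- poll job
  pure $ case r of
    Nothing        -> Ongoing job
    Just (Left e)  -> Failed e
    Just (Right v) -> Done v

-- A timeout or user interrupt simply cancels the job.
interruptSlot :: Async a -> IO ()
interruptSlot = cancel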

Nice! Instead of artificial 100ms delays, could one squeeze an IO call to the network in between the parts of the data structure? I’m asking because, especially in SCADA protocols, the address range of all available data is often arranged as a tree, with leaves drawn from a finite and statically known set of value types (like Present’s Value). Protocols allow you to “subscribe” to certain leaves of the tree and receive updates. There, the same batching problem arises: you don’t want the display to cycle through a hundred delayed updates when the viewer is only interested in the latest one, at the end of the queue.
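
Concretely, the kind of coalescing I mean could be sketched like this (illustrative names only, not tied to any particular protocol library): each subscribed leaf gets a slot where newer values overwrite older ones, so the display never replays the whole queue.

import Control.Concurrent.STM

-- One slot per subscribed leaf: every incoming update overwrites the last,
-- and the display only ever reads the newest value when it redraws.
newtype Latest a = Latest (TVar (Maybe a))

newLatest :: IO (Latest a)
newLatest = Latest <$> newTVarIO Nothing

-- Called for each update from the subscription; intermediate values are dropped.
publish :: Latest a -> a -> IO ()
publish (Latest var) x = atomically (writeTVar var (Just x))

-- Called by the renderer when it wants to draw the leaf.
readLatest :: Latest a -> IO (Maybe a)
readLatest (Latest var) = readTVarIO var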

I added the artificial delays just to check that the user experience works when there are delays for evaluation.

I’m not sure this answers your question. I’m not familiar with your domain.

However, one silly idea I had was that values of type IO a could be evaluated on demand and then the results displayed inline as yet more presentations. Not the same, but does involve I/O.

I’ve reached full entropy in my test implementation with brick. So I’m going to discard it and make a fresh implementation based on what I’ve learned, time permitting.

That’s exactly what I had in mind. Does that make the idea less silly then? Suppose

data Value = A | B deriving Show -- base types
data Addresses = Addr {foo :: Value, bar :: Value} -- description of API end-points

-- some network API calls (stub bodies here stand in for real requests)
fetch_foo :: IO Value
fetch_foo = pure A
fetch_bar :: IO Value
fetch_bar = pure B

data Fetch = Fetch {fetchFoo :: IO Value, fetchBar :: IO Value}
-- we could express Fetch and Addresses in a common higher-kinded type

addr :: Fetch
addr = Fetch fetch_foo fetch_bar

Since the fields of addr are lazy, the calls to fetch_foo or fetch_bar could be made on demand while presenting the structure in Hell.
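
To spell out the on-demand part (a sketch only; the stub bodies above stand in for real network requests): building addr performs no I/O, and each call only happens when its action is actually run.

demo :: IO ()
demo = do
  v <- fetchFoo addr  -- fetch_foo runs only here, on demand (a real network call in practice)
  print v             -- fetch_bar is never run at all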

So we both had the same silly idea. :joy:

Indeed, sketching this out for Hell, there’d be:

  • A case in the presenter that checks for any value of the shape IO a.
  • An IO-aware UI could show your record as Fetch { fetchFoo = [IO Value], .. }, where [IO Value] is a button that you could push; pushing it would kick off an async job to evaluate and execute the action and then present the resulting Value, similar to the other async evaluation jobs and with the same lifecycle (running, cancelled, excepted, or succeeded). A rough sketch follows this list.
  • Once the job finishes successfully, it could replace the button with the value, or show the value below it so that you could re-run it if desired.
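
Here’s that sketch of the lifecycle and what pushing the button could do, with illustrative names rather than Hell’s real types:

import Control.Concurrent.Async (Async, async, cancel, poll)
import Control.Exception (SomeException)

-- Lifecycle of one IO-valued slot in a presentation.
data JobState a
  = NotStarted (IO a)       -- rendered as a pushable [IO a] button
  | Running (Async a)
  | Cancelled               -- stopped by a user interrupt
  | Excepted SomeException
  | Succeeded a             -- rendered inline in place of the button

-- Pushing the button kicks off the action as an async job.
push :: JobState a -> IO (JobState a)
push (NotStarted action) = Running <$> async action
push other               = pure other

-- A user interrupt cancels a running job.
interrupt :: JobState a -> IO (JobState a)
interrupt (Running job) = cancel job >> pure Cancelled
interrupt other         = pure other

-- On each redraw, refresh a running job without blocking.
refresh :: JobState a -> IO (JobState a)
refresh (Running job) = do
  r <- poll job
  pure $ case r of
    Nothing        -> Running job
    Just (Left e)  -> Excepted e
    Just (Right v) -> Succeeded v
refresh other = pure other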

It all gets a bit beyond the scope of a simple scripting language by that point, but it’s a fun area of exploration. We use laziness for control structures and for some handy data structures, but I haven’t seen it used much as a driver for data exploration.