Since Haskell has linear types, which let us track resources, why can't we disable the GC like in ATS?

ATS: Introduction to Programming in ATS.


Why would you want to disable the GC? If it is just about removing pause times or reducing GC “overhead”, then you can just as well reduce the heap size (and thus GC time) by storing all your data in off-heap types such as those provided by linear-base.


Just a curiosity, not for any practical purpose. I know modern GCs are really efficient; I was only wondering why it's not possible.

A quick glance at that documentation suggests that ATS has strict semantics. If that’s correct, then what you have to remember about Haskell is that it has non-strict semantics, and the associated support mechanisms for those semantics traditionally relied on GC.

So turning off GC would also require the programmer to track usually-invisible runtime resources along with those of the actual program - that would be a lot of extra work! This is one major reason why programming in declarative languages like Haskell or Prolog usually requires less code: the runtime system spares you from most of the tedious and repetitive “low-level minutiae” you would have to contend with in more imperative languages, particularly the older ones…
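For a concrete sense of those usually-invisible resources: even a tiny lazy definition allocates thunks on the GC-managed heap behind the scenes. A minimal base-only illustration (the names are just for this sketch):

```haskell
-- Nothing here is evaluated at definition time: `nums` is an infinite
-- list whose cells are built lazily, as thunks on the GC-managed heap.
nums :: [Integer]
nums = map (* 2) [1 ..]

main :: IO ()
main = print (take 5 nums)  -- forces only five cells; the rest remains a thunk
```

Without a GC, every one of those intermediate thunks would be the programmer's problem to free.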

My guess is that we could have something like this someday. But LinearTypes as implemented today is only one step, and not nearly enough.

…actually, no linear types are required:

An even easier solution is just to never deallocate memory. From the user guide I’m getting the impression that’s what ATS does if you disable the GC:

However, if we want to run code with no support of garbage collection (GC), then we should definitely avoid using boxed tuples.

Therefore, calling functions like sqrmodsum can readily result in memory leaks in a setting where GC is not available.

And of course C also happily allows you to allocate memory without forcing you to free it up at some point.

(I should add that although ATS allows you to introduce memory leaks, it also allows you to write linear functions which prevent those memory leaks. Haskell is kind of similar except that you pretty clearly have to use some unsafe or foreign functions to introduce leaks.)
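To illustrate that foreign route around the GC: memory obtained with mallocBytes (from base's Foreign.Marshal.Alloc) lives outside the GC-managed heap, so omitting the final free really would leak, just as in C. A small sketch using only base:

```haskell
import Foreign.Marshal.Alloc (free, mallocBytes)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peekByteOff, pokeByteOff)

main :: IO ()
main = do
  -- Allocate 8 bytes that the GC does not know about or track:
  p <- mallocBytes 8 :: IO (Ptr ())
  pokeByteOff p 0 (42 :: Int)
  x <- peekByteOff p 0 :: IO Int
  print x
  -- If we skipped this `free`, the memory would simply leak;
  -- nothing in the type system forces us to call it.
  free p
```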


…maybe in a language like Erlang, where subtasks can be supervised and restarted if they fail (rather than the entire program being “supervised” by the resident OOM “debt-collector”).


…as do Racket, Python, Julia, Java, Go, and D: no great surprise there. Doing anything more than that usually requires more substantial changes:

Lively Linear Lisp — ‘Look Ma, No Garbage!’ (1992)

So first off, what you're thinking about is (necessarily) uniqueness types (like in Rust and C++), not linear types.

Uniqueness types are properties of values / types that say that a value has not been duplicated. If a value has a type which is unique, then you know that there are no copies of it. Linear types, however, are properties of bindings / arrows. If a function is linear, then you know that it can’t duplicate what gets passed in. Lastly, there are necessarily / essentially unique types, which are uniqueness types that must stay unique; this is the kind that C++ and Rust implement (which they call ownership).
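To make the "property of arrows" view concrete, here is a minimal GHC sketch (requires GHC ≥ 9.0 with LinearTypes; swapPair is just an illustrative name). The linear arrow constrains what the function does with its argument, but says nothing about whether the value was unique to begin with:

```haskell
{-# LANGUAGE LinearTypes #-}

-- A linear arrow promises that the *function* consumes its argument
-- exactly once; it does not promise the value has no other copies.
swapPair :: (a, b) %1 -> (b, a)
swapPair (x, y) = (y, x)

-- This would be rejected by the type checker: `x` is used twice.
-- dup :: a %1 -> (a, a)
-- dup x = (x, x)

main :: IO ()
main = print (swapPair (1 :: Int, "hi"))
```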

Haskell having only linear types is why linear arrays and the like have a clunky CPS encoding: they need it to emulate necessarily-unique types.

I would recommend reading Linearity and Uniqueness: An Entente Cordiale for more explanation of linearity vs uniqueness, and possibly Making Uniqueness Typing Less Unique if you want more in-depth information on uniqueness types.

Second, even if Haskell had uniqueness types (or somehow avoided GC by CPS-encoding them) and fully embraced them, it would still not be enough. You would need some sort of lifetime / region system to complement them.

Necessarily-unique types don’t let you reason about aliasing (aside from its absence). As in C++, necessary uniqueness prevents forgot-to-free and double-free errors, but it doesn’t prevent use-after-free errors. What you would also need is a lifetime / region system (famously, as in Rust) to allow for safe borrowing of a unique value and/or the ability to create memory that can freely alias and gets freed when it goes out of scope.

Haskell in some sense already has regions: the ST monad can be generalized into regions (see the links below). Even with this generalization, a garbage collector is still required, as plain lambda terms still exist. You would need to deeply integrate regions into Haskell to avoid that.
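As a base-only reminder of how ST already behaves like a one-region system: the rank-2 type of runST tags every STRef with a region variable s that cannot escape the region (sumST is just an illustrative name):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- runST :: (forall s. ST s a) -> a
-- The rank-2 `s` acts as a region tag: any STRef created inside is
-- tagged with `s` and cannot escape, which is precisely the lifetime
-- discipline described above.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

-- This would be rejected: the region variable `s` would escape runST.
-- leak = runST (newSTRef 0)

main :: IO ()
main = print (sumST [1 .. 10])
```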

For more reading I would recommend Monadic Regions on how to generalize ST into regions, or Monadic and Substructural Type Systems for Region-Based Memory Management on how to make such a system (plus all the previous paper’s information).


The uniqueness and regions problems also seem to be solved by the recent work on Linearly Qualified Types, there’s also a presentation about it from ICFP’22:

TL;DR/W: It allows you to write functions like this:

-- instead of the ugly CPS encoding
newMArray :: Linearly %1 => Int -> MArray a

-- borrowing-like
set :: RW n %1 => Int -> a -> MArray n a -> () ⧀ RW n

swap :: RW n %1 => Int -> Int -> MArray n a -> () ⧀ RW n
swap i j as =
   let
     !Ur ai = get i as
     !Ur aj = get j as
     !() = set i aj as
     !() = set j ai as
   in ()

Would an MArray version of swap like that need an {-# NOINLINE #-} pragma to avoid “surprises” like:

swap i w as
      ⋮
swap i x as

being expanded to:

   (let
     !Ur ai = get i as
     !Ur aw = get w as
     !() = set i aw as
     !() = set w ai as
   in ())
      ⋮
   (let
     !Ur ai = get i as
     !Ur ax = get x as
     !() = set i ax as
     !() = set x ai as
   in ())

and then the duplicate sub-expression (get i as) being replaced with one single shared expression:

   let !Ur ai = get i as in
   (let
     !Ur aw = get w as
     !() = set i aw as
     !() = set w ai as
   in ())
      ⋮
   (let
     !Ur ax = get x as
     !() = set i ax as
     !() = set x ai as
   in ())

…?

No, you shouldn’t need pragmas. As the abstract says:

Soundness of linear constraints is ensured by the fact that they desugar into Linear Haskell.

Linear types permeate all the way into Core, so optimizations should preserve them, although I believe that is not yet checked. @romes is working on that. But any problems there should be seen as bugs in GHC, not as problems with linear constraints.

Also, the second argument of swap is not a good example because that’s just an integer which may be shared without causing any problems. It is the third argument as which is the mutable array that would cause problems if it were shared improperly.

Edit: nevermind, I misread your post. The linear constraints will be desugared to an evidence passing style which works similarly to the state monad:

swap i w as
      ⋮
swap i x as

Desugars to something like this:

let s1 = swap s0 i w as in
      ⋮
let s2 = swap s1 i x as in

And that could get inlined like this:

   let s3 = (let
     !Ur ai = get s0 i as
     !Ur aw = get s0 w as
     !s1 = set s0 i aw as
     !s2 = set s1 w ai as
   in s2) in
      ⋮
   let s6 = (let
     !Ur ai = get s3 i as
     !Ur ax = get s3 x as 
     !s4 = set s3 i ax as
     !s5 = set s4 x ai as
   in s5) in

So there are no duplicate subexpressions.


[…] an evidence passing style which works similarly to the state monad […]

That’s interesting - Single-Assignment C uses a similar technique to help automate the use of stateful entities: