Why would you want to disable the GC? If it is just about removing pause times or reducing GC "overhead", then you can just as well reduce the heap size (and thus GC time) by storing all your data in off-heap types such as those provided by linear-base.
Just a curiosity, not for any purpose. I know modern GCs are really efficient; I was only wondering why it's not possible.
A quick glance at that documentation suggests that ATS has strict semantics. If that's correct, then what you have to remember about Haskell is that it has non-strict semantics, and the associated support mechanisms for those semantics traditionally relied on GC.
So turning off GC would also require the programmer to track usually-invisible runtime resources along with those of the actual program - that would be a lot of extra work! This is one major reason why programming in declarative languages like Haskell or Prolog usually requires less code: the runtime system spares you from most of the tedious and repetitive "low-level minutiae" you would have to contend with in more imperative languages, particularly the older ones…
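To make those "usually-invisible runtime resources" concrete, here is a minimal sketch (names are illustrative, not from any library) of the heap objects that non-strict evaluation creates behind the programmer's back:

```haskell
-- Under non-strict semantics, a binding allocates a *thunk*: a heap
-- object representing the suspended computation. The programmer never
-- names or frees it; the runtime and GC manage its whole life cycle.
expensive :: Int -> Int
expensive n = sum [1 .. n]

main :: IO ()
main = do
  let y = expensive 1000000  -- no work yet: 'y' is a thunk on the heap
  print (y + y)              -- first use forces the thunk; once 'y' is
                             -- dead, only the GC knows it can be freed
```

Without a GC, every such thunk (and the closures it captures) would become a resource the programmer has to account for explicitly.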
My guess is that maybe someday we could have something like this, but LinearTypes today is only one step, and not nearly enough as implemented.
…actually, no linear types are required:
An even easier solution is just to never deallocate memory. From the user guide I'm getting the impression that's what ATS does if you disable the GC:
However, if we want to run code with no support of garbage collection (GC), then we should definitely avoid using boxed tuples.
Therefore, calling functions like sqrmodsum can readily result in memory leaks in a setting where GC is not available.
And of course C also happily allows you to allocate memory without forcing you to free it up at some point.
(I should add that although ATS allows you to introduce memory leaks, it also allows you to write linear functions which prevent those memory leaks. Haskell is kind of similar except that you pretty clearly have to use some unsafe or foreign functions to introduce leaks.)
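As a concrete illustration of that last point, the standard way to leak memory in Haskell is to allocate outside the GC'd heap via the FFI, e.g. with mallocBytes from Foreign.Marshal.Alloc (a real base API; the leak itself is the point of the sketch):

```haskell
import Foreign.Marshal.Alloc (mallocBytes)
import Foreign.Ptr (Ptr)
import Data.Word (Word8)

-- Allocates 1 MiB outside the garbage-collected heap and never calls
-- 'free' on it. The GC does not track this pointer, so if the result
-- is dropped, this is a genuine leak - exactly the kind of thing the
-- type system won't stop you from doing once you go through Foreign.*.
leak :: IO (Ptr Word8)
leak = mallocBytes (1024 * 1024)
```

Ordinary Haskell values, by contrast, are always reachable-or-collectable, so "leaks" there are really just unwanted retention, not lost memory.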
…maybe in a language like Erlang, where subtasks can be supervised and restarted if they fail (rather than the entire program being "supervised" by the resident OOM "debt-collector").
…as do Racket, Python, Julia, Java, Go and D: no great surprise there. Doing anything more than that usually requires more substantial changes:
So first off, what you're thinking about is (necessarily) uniqueness types (like in Rust and C++), not linear types.
Uniqueness types are properties of values / types that say that a value has not been duplicated. If a value has a type which is unique, then you know that there are no copies of it. Linear types, however, are properties of bindings / arrows. If a function is linear, then you know that it can't duplicate what gets passed in. Lastly, there are necessarily / essentially unique types, which are uniqueness types that must stay unique - the kind that C++ and Rust implement (which they call ownership).
Haskell only having linear types is why linear arrays and the like have a clunky CPS encoding: they need it to emulate necessarily unique types.
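For reference, the CPS encoding in question looks roughly like this (paraphrased from linear-base's Data.Array.Mutable.Linear; exact signatures may differ between versions):

```haskell
-- Paraphrased from linear-base (Data.Array.Mutable.Linear);
-- details may vary between library versions.
alloc :: Int -> a -> (Array a %1 -> Ur b) %1 -> Ur b

-- The array value exists only inside the continuation, and the %1
-- arrows force every operation to consume and re-thread it exactly
-- once. That rules out aliasing and escape - emulating uniqueness -
-- but it is what makes the style feel clunky compared to languages
-- with native uniqueness/ownership.
```

Every array operation similarly returns the array back to you, so code becomes a chain of rebindings rather than ordinary in-place updates.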
I would recommend reading Linearity and Uniqueness: An Entente Cordiale for more explanation on linearity vs uniqueness, and possibly Making Uniqueness Typing Less Unique if you want more in-depth info on uniqueness types.
Second off, even if Haskell had uniqueness types (or somehow avoided GC by CPS-encoding them) and fully embraced them, it would still not be enough. You would need some sort of lifetime / region system to complement them.
Necessarily unique types don't let you reason about aliasing (aside from its absence). Like in C++, necessary uniqueness prevents forgot-to-free and double-free errors but doesn't prevent use-after-free errors. What you would also need is a lifetime / region system (famously, like in Rust) to allow for safe borrowing of a unique value and/or the ability to create memory that can freely alias and gets freed when it goes out of scope.
Haskell in some sense already has regions, as the ST monad can be generalized into regions (see links below). Even with this generalization, it's still required to use a garbage collector, as plain lambda terms still exist. You would need to deeply integrate regions into Haskell if you want to avoid that.
For more reading I would recommend Monadic Regions on how to generalize ST into regions, or Monadic and Substructural Type Systems for Region-Based Memory Management on how to build such a system (plus all the previous paper's information).
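The basic region trick Haskell already ships is runST's rank-2 type, which is standard GHC with no extra extensions needed at the use site:

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, writeSTRef)

-- runST :: (forall s. ST s a) -> a
-- The universally quantified 's' names the region. Because 's' cannot
-- appear in the result type 'a', no STRef can outlive its region.
counter :: Int
counter = runST $ do
  r <- newSTRef (0 :: Int)
  writeSTRef r 41
  n <- readSTRef r
  writeSTRef r (n + 1)
  readSTRef r
-- counter evaluates to 42

-- Rejected by the type checker: the reference's region variable 's'
-- would escape into the result type.
-- bad = runST (newSTRef True)
```

The point in the post above is that this only scopes *effects*; the memory itself is still GC-managed, because ordinary lambda terms can still close over anything.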
The uniqueness and regions problems also seem to be solved by the recent work on Linearly Qualified Types; there's also a presentation about it from ICFP'22:
TL;DR/W: It allows you to write functions like this:
-- instead of the ugly CPS encoding
newMArray :: Linearly %1 => Int -> MArray a
-- borrowing-like
set :: RW n %1 => Int -> a -> MArray n a -> () ∧ RW n
swap :: RW n %1 => Int -> Int -> MArray n a -> () ∧ RW n
swap i j as =
  let
    !Ur ai = get i as
    !Ur aj = get j as
    !() = set i aj as
    !() = set j ai as
  in ()
Would a MArray-version of swap like that need an {-# NOINLINE #-} pragma to avoid "surprises" like:
swap i w as
⋮
swap i x as
being expanded to:
(let
   !Ur ai = get i as
   !Ur aw = get w as
   !() = set i aw as
   !() = set w ai as
 in ())
⋮
(let
   !Ur ai = get i as
   !Ur ax = get x as
   !() = set i ax as
   !() = set x ai as
 in ())
and then the duplicate sub-expression (get i as) being replaced with one single shared expression:
let !Ur ai = get i as in
(let
   !Ur aw = get w as
   !() = set i aw as
   !() = set w ai as
 in ())
⋮
(let
   !Ur ax = get x as
   !() = set i ax as
   !() = set x ai as
 in ())
…?
No, you shouldn't need pragmas. As the abstract says:
Soundness of linear constraints is ensured by the fact that they desugar into Linear Haskell.
Linear types permeate all the way into Core, so optimizations should preserve them, although I believe that is not yet checked at the moment. @romes is working on that. But any problems with that should be seen as bugs in GHC, not as problems with linear constraints.
Also, the second argument of swap is not a good example, because that's just an integer which may be shared without causing any problems. It is the third argument, as, the mutable array, that would cause problems if it were shared improperly.
Edit: never mind, I misread your post. The linear constraints will be desugared to an evidence-passing style which works similarly to the state monad:
swap i w as
⋮
swap i x as
Desugars to something like this:
let s1 = swap s0 i w as in
⋮
let s2 = swap s1 i x as in
And that could get inlined like this:
let s3 = (let
            !Ur ai = get s0 i as
            !Ur aw = get s0 w as
            !s1 = set s0 i aw as
            !s2 = set s1 w ai as
          in s2) in
⋮
let s6 = (let
            !Ur ai = get s3 i as
            !Ur ax = get s3 x as
            !s4 = set s3 i ax as
            !s5 = set s4 x ai as
          in s5) in
So there are no duplicate subexpressions.
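This evidence-passing trick is essentially how GHC already represents IO: a state token is threaded through every action, and the data dependency on the token is what stops the optimizer from merging textually identical calls. The definition below is from GHC's own GHC.Types, lightly abridged:

```haskell
-- From GHC.Types (abridged): IO is state-token passing underneath.
newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))

-- Two occurrences that look identical textually, e.g.
--   putStrLn "x" >> putStrLn "x"
-- desugar to the underlying function applied to *different* state
-- tokens, so common-subexpression elimination cannot collapse them.
-- The evidence-passing desugaring of linear constraints relies on the
-- same mechanism: each RW n evidence value plays the role of a token.
```

So the safety of the optimization story reduces to the (already battle-tested) story for IO and ST.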
[…] an evidence-passing style which works similarly to the state monad […]
That's interesting - Single-Assignment C uses a similar technique to help automate the use of stateful entities: