Yes, but unfortunately the open door is still in scope!
ah I didn’t see the ask was to mutate the type of door… yes that’s quite different.
Possibly some combination of my snippet + linear types + allow name shadowing?
I remember the 90s, when my computer was running on a PowerPC CPU, and there was this functional language, called Clean, which was essentially a dialect of Miranda / Haskell, but with some kind of linear types. Instead of writing :: IO a, people would write :: *World → (*World, a), a function type where the World argument could be used only once.
So, yes, mutability is possible in a Haskell-like language using linear logic.
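In today’s GHC you can sketch that style with the LinearTypes extension. A minimal sketch (the World token and putLine are hypothetical stubs; in GHC the linear arrow constrains how a function uses its argument, so the guarantee only bites when the world is threaded linearly from its creation):

{-# LANGUAGE LinearTypes #-}

-- Clean-style world passing: every primitive consumes the current World
-- exactly once and returns a new one, so a stale World token can't be reused.
newtype World = World ()

putLine :: String -> World %1 -> World
putLine _ (World u) = World u      -- stub: a real primitive would print here

program :: World %1 -> (World, Int)
program w0 = (putLine "world" (putLine "hello" w0), 42)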
The real question is: Why would you want to do that? Does it make your code faster to write, easier to prove correct, or faster at running?
The real question is: Why would you want to do that? Does it make your code […] easier to prove correct?
Possibly, yes:
…I only say “possibly” because advances in proof techniques and tools may now have simplified the use of abstract monadic state in correctness proving. But I will defer to the experts here on that topic!
Other work by those and associated authors includes:
- Comparing Proofs about I/O in Three Programming Paradigms (2001)
- Proving Correctness of Programs with IO - A Paradigm Comparison (2001)
What I meant was more “why mutability in the first place” rather than the more specific question of “why linear logic for mutability”. It seems clear to me that a Haskell program which does not mutate explicitly is easier to reason about than a program which does.
From the abstract of State in Haskell (1995):
…with the caveat that some of those algorithms have now been superseded by “mutability-free” alternatives - otherwise, there’s still the type ST s a, and most importantly: runST :: (forall s . ST s a) -> a
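For anyone who hasn’t used it: runST lets a function mutate freely on the inside while remaining pure on the outside. A minimal illustration with an STRef (standard library only):

import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- sumTo mutates a reference internally, yet the function as a whole is pure:
-- the rank-2 type of runST guarantees the reference cannot escape.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (modifySTRef' acc . (+)) [1 .. n]
  readSTRef acc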
…with the caveat that:
…oh, that’s right - they didn’t. But they’re not the only ones wanting Haskell to be rid of IO a:
A slight correction: a set parameter in Hylo is not write-only, but must be written before it can be read, because it enters the function in an uninitialized state.
The premise of Mutable Value Semantics is that mutation itself is not troublesome; instead the problem is the mutation of shared state. Obviously, the global variables mentioned in the article you cite are extremely shared.
But mutation by itself is troublesome:
…and mutation is the epitome of side effects! As I mentioned in this post on a related topic:
If I were to design a new language, I’d definitely try to have mutable value semantics (in the pure fragment of the language, of course). More recent languages like Lean or Koka do that very successfully. If we had a good macro system, we could just copy the Lean approach, which desugars to simple, pure state passing, just like what the State monad would do, but less clumsy. It is the obvious idiom to apply for many kinds of algorithms.
[…] simple, pure state passing, just like what the State monad would do, but less clumsy. It is the obvious idiom to apply for many kinds of algorithms.
Sure, the macro system would be dealing with the extra state parameters and arguments…but the code will still look imperative. So either approach isn’t really going to make that much of a difference with regards to aesthetics.
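To make that concrete, here is roughly what the two sides look like in Haskell (a sketch; the explicit version approximates what a Lean-style desugaring would produce):

import Control.Monad.State (State, get, modify)

-- Imperative-looking version using the State monad.
bumpTwice :: State Int Int
bumpTwice = do
  modify (+ 1)
  modify (+ 1)
  get

-- The same computation as plain state passing, which is what the desugaring
-- boils down to: runState bumpTwice s0 == bumpTwice' s0 for any s0.
bumpTwice' :: Int -> (Int, Int)
bumpTwice' s0 =
  let s1 = s0 + 1
      s2 = s1 + 1
  in (s2, s2)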
But mutation by itself is troublesome:
The significant increase in complexity caused by mutation is almost entirely due to sharing.
There is nothing intrinsically more complex about this program compared to its “pure” counterpart:
var p = (x: 1, y: 2)
p.x = g(p.y)
p.y = g(p.x)
print(p)
Without even mentioning a specific programming language, I strongly suspect that everyone reading this thread knows what the intent of this program is and can predict its outcome. Therefore, renaming each intermediate value of p for the sake of immutability would arguably just add noise to this particular program, while collapsing everything into a single expression would make it harder to read. And if = offends, one can always pick another symbol.
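For concreteness, here is what that renamed-intermediates version looks like when transcribed into a Haskell record (g left abstract; the names are only for illustration):

data P = P { x :: Int, y :: Int } deriving Show

-- The program above, written with a fresh name for every intermediate value of p.
example :: (Int -> Int) -> P
example g =
  let p0 = P { x = 1, y = 2 }
      p1 = p0 { x = g (y p0) }   -- p.x = g(p.y)
      p2 = p1 { y = g (x p1) }   -- p.y = g(p.x)
  in p2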
This program is not only obvious to humans but also to compilers. While smart developers and optimizers can understand that p' = (x: g(p.y), y: p.y) is actually equivalent to a partwise in-place update of p, it is not an easy guess. To the best of my knowledge, a general procedure for all functional updates has yet to be discovered. In contrast, p.x = g(p.y) is concise and straightforward.
To me, these observations suggest that there’s an intrinsic value to mutation that is worth preserving and goes beyond convenience. Instead, it relates to some “truth” about the act of computing anything, especially if that computation takes place in the real world. Not everything is better expressed as a declarative relation (O’Neill 2009, The Genuine Sieve of Eratosthenes); I prefer baking using imperative recipes.
Losing confluence does indeed increase complexity. But again, when sharing is off the table, the only issue is to define a precise evaluation order so that people (and machines) can reason about it. That is not as troublesome as it may sound. One can pick innermost, left-to-right, and then an expression like f(p)(g(p.x), g(p.y)) is deterministic and unambiguous.
I strongly suspect that everyone reading this thread knows what the intent of this program is and can predict its outcome.
I don’t think I can understand it! Let me try. First p.x is set to g(p.y). Then p.y is set to g applied to the (presumably) new value of p.x, that is, g of the original p.y? So the final value of p.y is g applied to g applied to the original value of p.y?
If that’s really the intention then I think I’d prefer to see
p' = (x : g(p.y), y : g(g(p.y)))
Isn’t that the “pure counterpart” you’re referring to? I certainly see it as less complex.
Then we can agree to disagree. There’s certainly a part of subjectivity in what one considers “complex”. I remain convinced that some algorithms are better expressed with mutation, for both humans and compilers.
It is unlikely that there exists a single programming paradigm categorically better than all others. Following this premise, the interest of MVS (i.e., the trend mentioned by OP) is to offer a framework in which functional and imperative programming can mingle with relative ease.
Then we can agree to disagree
We could, but that might be leaving some value on the table! It would be enlightening if you could help me understand why it’s clearer to write a program whose result depends on the order of execution of its lines than to write one where there is no such ambiguity.
It is unlikely that there exists a single programming paradigm categorically better than all others. Following this premise, the interest of MVS (i.e., the trend mentioned by OP) is to offer a framework in which functional and imperative programming can mingle with relative ease.
These two sentences seem not quite contradictory, but certainly somewhat at odds. There may be no best paradigm but mutable value semantics certainly sounds like it’s capturing the best of both worlds, doesn’t it? That is to say, a language with mutable value semantics could be “better” than both any given functional language and any given imperative language.
By the way, this is why I’m excited about simple effect systems like effectful. They basically allow (and I would say encourage) writing Haskell in a way that combines functional and imperative styles. In Haskell the imperative style has to be supported via a monad. A language natively supporting mutable value semantics may be less clunky.
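For a flavour of what I mean, here is a small example using effectful’s local State effect (from memory, so treat the exact imports as approximate):

import Effectful (runPureEff)
import Effectful.State.Static.Local (evalState, get, modify)

-- Imperative in shape, pure in fact: the State effect is handled locally by
-- evalState, so no effects are observable from the outside.
sumThenDouble :: [Int] -> Int
sumThenDouble xs = runPureEff . evalState (0 :: Int) $ do
  mapM_ (\n -> modify (+ n)) xs
  (* 2) <$> get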
It would be enlightening if you could help me understand why it’s clearer to write a program whose result depends on the order of execution of its lines than to write one where there is no such ambiguity.
I would say that it is because people tend to read words one after the other. If anything, a let expression has a “reading” order that matches the natural way French speakers like myself read text. When I see let y = g(p.y) in f(y) I tend to first map y to some value before I read f(y). Having an evaluation order match the natural reading order most people use doesn’t seem to be a bad bet to me.
I see little value in the ability to read a program in any order because that is most likely not the way I will read the program anyway. In contrast, having an evaluation order makes programs easier to compile. So I’m willing to sacrifice confluence of arbitrary terms.
Also, note that there is no ambiguity once we’ve picked an order. That is certainly an arbitrary decision to make, but once it has been made, the ambiguity goes away. Many things have order (e.g. pattern matching in Haskell). It does not mean they are ambiguous.
That is to say, a language with mutable value semantics could be “better” than both any given functional language and any given imperative language.
A lattice need not have a top. I do think a language with MVS is “better” than a purely functional language or a purely imperative one, but I don’t think it is necessarily the best possible way to write any program. Concurrency creates problems quite difficult to solve in a strict MVS world.
A language natively supporting mutable value semantics may be less clunky.
I can certainly believe that =)
Having an evaluation order match the natural reading order most people use doesn’t seem to be a bad bet to me.
Sure, but firstly, why impose any order at all, and secondly, it’s not just about order. To the first point, you can read pure expressions in your “natural reading order” if you like. You’ll get the correct answer. You can also read them in any other order you like (although to make most sense you probably want to read definitions before usages), and you’ll still get the correct answer!
To the second point, it’s not just about order, it’s about variables having multiple definition sites. For example, take the following program, similar to yours just longer. I can understand it by reading it in order, of course, but even when I do so I have to keep the redefinition of p.x in my working memory to correctly interpret p.y = g(p.x). By contrast, in a pure language p.x can only have one definition site, so when I read p'.y = g(p.x) I just look up the definition of p.x and I’m done.
var p = (x: 1, y: 2)
p.x = g(p.y)
... many lines of code ...
p.y = g(p.x)
print(p)
having an evaluation order makes programs easier to compile
I doubt the difference has much practical significance. Isn’t the difference resolved by a simple topological sort? Is that too costly to write, maintain or run?
why impose any order at all
Because some algorithms have a notion of state, and state updates are best expressed with an evaluation order. The paper from Melissa O’Neill makes that point far better than I could when she describes the Sieve of Eratosthenes like this:
We start with a table of numbers (e.g., 2, 3, 4, 5, . . . ) and progressively cross off numbers in the table until the only numbers left are primes.
And then demonstrates how a naive translation of this imperative description into a pure functional style is hopelessly inefficient.
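(For reference, the naive translation in question is essentially the well-known two-liner below, which performs trial division against every earlier prime rather than actually sieving.)

primes :: [Int]
primes = go [2 ..]
  where
    go []       = []
    go (p : xs) = p : go [x | x <- xs, x `mod` p /= 0]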
To the second point, it’s not just about order, it’s about variables having multiple definition sites. For example, take the following program, similar to yours just longer.
Longer programs are harder to read, whether or not variables have one definition. Otherwise SSA would have become the preferred way to write C code.
Even in pure formal definitions one breaks the one-definition rule every time one uses quantification. Again, that suggests a dogmatic application of the constraint is impeding the “natural” way to express certain things.
Exceptions to the one-definition rule are acceptable when they happen in a narrow scope. Again, there’s a bit of subjectivity here. I might claim that my program example is small enough to tolerate redefinitions and you may disagree. I’m not trying to convince you otherwise.
Also note that without sharing, a language capable of enforcing immutability can give qualifiers like “const”, “val”, etc. their true meaning. We can write Haskell-like let expressions in Hylo, including a one-definition constraint.
I doubt the difference has much practical significance. Isn’t the difference resolved by a simple topological sort? Is that too costly to write, maintain or run?
It is not only about determining an order between sub-expressions. It is also about determining the constraints on the memory in which data is actually stored. As I mentioned, recovering the efficiency of an in-place update in general is a heroic task with no strict guarantee of success. The paper about MVS mentioned by OP shows what that means in practice with some benchmarks.
The sieve of Eratosthenes is a good example. Here’s how I would implement it in a pure and functional style:
import Data.List (foldl')
import Data.Vector (Vector, (!))
import qualified Data.Vector as V

-- mapFromStride f start stride applies f at indices start, start+stride, … (helper sketched below)
sieve :: Vector Int -> Vector Int
sieve v = V.filter (> 1) $ foldl' step v [0 .. V.length v - 1] where
  step v i
    | let p = v ! i, p > 1 = mapFromStride (const 1) (p * p) p v
    | otherwise = v
I don’t think this is harder to understand than an imperative implementation, and it is still very efficient. This can sieve more than 100,000 numbers in a second if mapFromStride creates a copy, or more than 10 million if you “unsafely” make it in-place. The new “functional but in-place” paradigm would safely recognize that this update could be done in-place.
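For completeness, here is one possible (copying) definition of mapFromStride, reusing the imports above; it is just one way to write it, not necessarily the original, and an in-place variant would use a mutable vector instead:

-- Apply f to every element at indices start, start + stride, start + 2*stride, …
mapFromStride :: (Int -> Int) -> Int -> Int -> Vector Int -> Vector Int
mapFromStride f start stride v =
  v V.// [ (i, f (v ! i)) | i <- [start, start + stride .. V.length v - 1] ]

With that definition, sieve (V.generate 100 id) should evaluate to the vector of primes below 100.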
Otherwise SSA would have become the preferred way to write C code.
It seems immutability is preferred, at least for C++: C++ Core Guidelines
Con: Constants and immutability
You can’t have a race condition on a constant. It is easier to reason about a program when many of the objects cannot change their values. Interfaces that promise “no change” of objects passed as arguments greatly increase readability.
And of course Rust goes even further by making all variables immutable by default.
To be clear about my stance, I agree with your conclusions. I am a proponent of imperative programming and I don’t believe it is at odds with functional programming. For evidence of this, see my answer to Jose Valim’s challenge, which uses (an interface isomorphic to) mutable state. In fact I think effect systems like effectful are the culmination of the quip that “Haskell is the world’s finest imperative programming language”.
I do, however, disagree with your reasoning. I don’t see your small example program as clear, and I don’t believe in forcing programming language constructs to appear in a specific order just because people read in a specific order, nor because some problems are best solved when constructs appear in a specific order. I believe in allowing pure code to appear in any order, and effectful code to appear in the order necessary to achieve the desired result. One example of this distinction is between Haskell’s (pure) let expressions and monadic <- binds.
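A tiny illustration of that distinction (contrived, but it compiles as-is):

-- Pure let: the two bindings can be written in either order without changing
-- the result, because there is no notion of "when" they happen.
pureExample :: Int
pureExample =
  let b = a * 10
      a = 1 + 2
  in b                    -- 30, even though 'a' is written after its use

-- Monadic binds: swapping the two <- lines changes which line of input ends
-- up in which variable, so the textual order carries meaning.
effectfulExample :: IO (String, String)
effectfulExample = do
  first  <- getLine
  second <- getLine
  pure (first, second)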
It is not only about determining an order between sub-expressions. It is also about determining the constraints on the memory in which data is actually stored. As I mentioned, recovering the efficiency of an in-place update in general is a heroic task with no strict guarantee of success.
That’s a fair point. Haskell does allow that, of course, with the State monad, or STRef/IORef for better performance, or unboxed mutable arrays for the best performance. As I said before, I admit that that is clunky but I haven’t seen anything I prefer yet (though I haven’t looked at Hylo).
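To show what the clunky-but-fast end of that spectrum looks like, here is a rough in-place version of the sieve above using an unboxed mutable vector inside ST (a sketch, not benchmarked):

import Control.Monad (forM_, when)
import qualified Data.Vector.Unboxed as U
import qualified Data.Vector.Unboxed.Mutable as MU

-- Same idea as the pure sieve, but crossing numbers off in place.
-- U.create runs the ST action and freezes the result, so sieveInPlace is pure.
sieveInPlace :: Int -> U.Vector Int
sieveInPlace n = U.filter (> 1) $ U.create $ do
  v <- U.thaw (U.generate n id)               -- v ! i == i to start with
  forM_ [2 .. n - 1] $ \p -> do
    x <- MU.read v p
    when (x > 1) $
      forM_ [p * p, p * p + p .. n - 1] $ \i ->
        MU.write v i 1                        -- cross off multiples of p
  pure v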
At this point I am content to agree to disagree