Mutable Value Semantics Trend vs Immutability

Then we can agree to disagree. There’s certainly an element of subjectivity in what one considers “complex”. I remain convinced that some algorithms are better expressed with mutation, for both humans and compilers.

It is unlikely that there exists a single programming paradigm categorically better than all others. Following this premise, the appeal of MVS (i.e., the trend mentioned by OP) is that it offers a framework in which functional and imperative programming can mingle with relative ease.

We could, but that might be leaving some value on the table! It would be enlightening if you could help me understand why it’s clearer to write a program whose result depends on the order of execution of its lines than one where there is no such ambiguity.

It is unlikely that there exists a single programming paradigm categorically better than all others. Following this premise, the appeal of MVS (i.e., the trend mentioned by OP) is that it offers a framework in which functional and imperative programming can mingle with relative ease.

These two sentences seem not quite contradictory, but certainly somewhat at odds. There may be no best paradigm but mutable value semantics certainly sounds like it’s capturing the best of both worlds, doesn’t it? That is to say, a language with mutable value semantics could be “better” than both any given functional language and any given imperative language.

By the way, this is why I’m excited about simple effect systems like effectful. They basically allow (and I would say encourage) writing Haskell in a way that combines functional and imperative styles. In Haskell the imperative style has to be supported via a monad. A language natively supporting mutable value semantics may be less clunky.
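
As a rough illustration of what I mean, here is a small sketch against effectful’s State effect (the names countdown and runCountdown are mine):

{-# LANGUAGE FlexibleContexts, TypeOperators #-}

import Effectful (Eff, runPureEff, (:>))
import Effectful.State.Static.Local (State, evalState, get, put)

-- An imperative-looking loop written against a State effect;
-- the whole computation is still pure from the outside.
countdown :: State Int :> es => Eff es [Int]
countdown = do
  n <- get
  if n <= 0
    then pure []
    else do
      put (n - 1)
      (n :) <$> countdown

runCountdown :: Int -> [Int]
runCountdown start = runPureEff (evalState start countdown)

Here runCountdown 3 yields [3, 2, 1], and the state handling never leaks into the result type.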


I would say that it is because people tend to read words one after the other. If anything, a let expression has a “reading” order that matches the natural way French speakers like myself read text. When I see let y = g(p.y) in f(y) I tend to first map y to some value before I read f(y). Having an evaluation order match the natural reading order most people use doesn’t seem to be a bad bet to me.

I see little value in the ability to read a program in any order, because that is most likely not the way I will read the program anyway. In contrast, having an evaluation order makes programs easier to compile. So I’m willing to sacrifice confluence of arbitrary terms.

Also, note that there is no ambiguity once we’ve picked an order. That is certainly an arbitrary decision to make, but once it has been made, the ambiguity goes away. Many things have an order (e.g. pattern matching in Haskell); that does not mean they are ambiguous.
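
For instance (a tiny Haskell sketch), the clauses of a definition are tried top to bottom, and moving the catch-all above the literal pattern would change the meaning; nobody calls that ambiguity:

-- Clause order matters: the first matching equation wins,
-- so the catch-all clause has to come last.
describe :: Int -> String
describe 0 = "zero"
describe n
  | n < 0     = "negative"
  | otherwise = "positive"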

A lattice need not have a top. I do think a language with MVS is “better” than a purely functional language or a purely imperative one, but I don’t think it is necessarily the best possible way to write any program. Concurrency creates problems that are quite difficult to solve in a strict MVS world.

I can certainly believe that =)

Having an evaluation order match the natural reading order most people use doesn’t seem to be a bad bet to me.

Sure, but firstly, why impose any order at all, and secondly, it’s not just about order. To the first point, you can read pure expressions in your “natural reading order” if you like. You’ll get the correct answer. You can also read them in any other order you like (although to make most sense you probably want to read definitions before usages), and you’ll still get the correct answer!

To the second point, it’s not just about order, it’s about variables having multiple definition sites. For example, take the following program, similar to yours but longer. I can understand it by reading it in order, of course, but even when I do so I have to keep the redefinition of p.x in my working memory to correctly interpret p.y = g(p.x). By contrast, in a pure language p.x can only have one definition site, so when I read p'.y = g(p.x) I just look up the definition of p.x and I’m done.

var p = (x: 1, y: 2)
p.x = g(p.y)
... many lines of code ...
p.y = g(p.x)
print(p)
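
For comparison, here is roughly how the same shape looks with single definition sites (a Haskell sketch; g is left abstract and the primed names are simply fresh bindings):

-- Every name has exactly one definition site, so reading "g x'" only
-- requires looking up the single place where x' is defined.
example :: (Int -> Int) -> (Int, Int)
example g =
  let p  = (1, 2)     -- the original value, corresponding to var p
      x' = g (snd p)  -- p.x = g(p.y)
      -- ... many lines of code ...
      y' = g x'       -- p.y = g(p.x), reading the new x
  in (x', y')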

having an evaluation order makes programs easier to compile

I doubt the difference has much practical significance. Isn’t the difference resolved by a simple topological sort? Is that too costly to write, maintain or run?
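
To be concrete about what I have in mind, here is a sketch using containers’ Data.Graph (orderBindings is a made-up name, and mutually recursive bindings are ignored):

import Data.Graph (graphFromEdges, topSort)

-- Order non-recursive bindings so that every definition precedes its uses:
-- each entry is (the binding itself, its name, the names it mentions).
orderBindings :: Ord name => [(binding, name, [name])] -> [binding]
orderBindings entries = [ b | v <- reverse (topSort g), let (b, _, _) = node v ]
  where
    (g, node, _) = graphFromEdges entries

For example, orderBindings [("y = f x", "y", ["x"]), ("x = 1", "x", [])] yields ["x = 1", "y = f x"].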

Because some algorithms have a notion of state, and state updates are best expressed with an evaluation order. Melissa O’Neill’s paper, The Genuine Sieve of Eratosthenes, makes that point far better than I could when she describes the sieve like this:

We start with a table of numbers (e.g., 2, 3, 4, 5, …) and progressively cross off numbers in the table until the only numbers left are primes.

She then demonstrates how a naive translation of this imperative description into a pure functional style is hopelessly inefficient.
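
For reference, the naive version she critiques is essentially the familiar one-liner (a sketch; it really performs trial division rather than crossing numbers off):

-- The "unfaithful sieve": every candidate is filtered against each prime
-- found so far, which is trial division, not the true crossing-off algorithm.
primes :: [Int]
primes = sieve [2 ..]
  where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]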

Longer programs are harder to read whether or not each variable has a single definition; otherwise SSA would have become the preferred way to write C code.

Even in pure formal definitions one breaks the one-definition rule every time one uses quantification (the same bound variable name is happily reused from one formula to the next). Again, that suggests a dogmatic application of the constraint impedes the “natural” way to express certain things.

Exceptions to the one-definition rule are acceptable when they happen in a narrow scope. Again, there’s a bit of subjectivity here. I might claim that my program example is small enough to tolerate redefinitions and you may disagree. I’m not trying to convince you otherwise.

Also note that, without sharing, a language capable of enforcing immutability can give qualifiers like “const”, “val”, etc. their true meaning. We can write Haskell-like let expressions in Hylo, including a one-definition constraint.

It is not only about determining an order between sub-expressions. It is also about determining the constraints on the memory in which data is actually stored. As I mentioned, recovering the efficiency of an in-place update in general is a heroic task with no strict guarantee of success. The paper about MVS mentioned by OP shows what that means in practice with some benchmarks.

The sieve of Eratosthenes is a good example. Here’s how I would implement it in a pure and functional style:

import Data.List (foldl')
import Data.Vector (Vector, (!))
import qualified Data.Vector as V

-- Assumes v ! i == i initially (e.g. V.generate n id) and that mapFromStride f start stride xs applies f at indices start, start + stride, ...
sieve :: Vector Int -> Vector Int
sieve v = V.filter (> 1) $ foldl' step v [0 .. V.length v - 1] where
  step v i
    | let p = v ! i, p > 1 = mapFromStride (const 1) (p * p) p v
    | otherwise = v

I don’t think this is harder to understand than an imperative implementation, and it is still very efficient. It can sieve more than 100,000 numbers in a second if mapFromStride creates a copy, or more than 10 million if you “unsafely” make it in place. The new “functional but in-place” paradigm would safely recognize that this update can be done in place.

It seems immutability is preferred, at least for C++: C++ Core Guidelines

Con: Constants and immutability

You can’t have a race condition on a constant. It is easier to reason about a program when many of the objects cannot change their values. Interfaces that promise “no change” of objects passed as arguments greatly increase readability.

And of course Rust goes even further by making all variables immutable by default.

To be clear about my stance, I agree with your conclusions. I am a proponent of imperative programming and I don’t believe it is at odds with functional programming. For evidence of this, see my answer to Jose Valim’s challenge, which uses (an interface isomorphic to) mutable state. In fact I think effect systems like effectful are the culmination of the quip that “Haskell is the world’s finest imperative programming language”.

I do, however, disagree with your reasoning. I don’t see your small example program as clear, and I don’t believe in forcing programming language constructs to appear in a specific order just because people read in a specific order, nor because some problems are best solved when constructs appear in a specific order. I believe in allowing pure code to appear in any order, and effectful code to appear in the order necessary to achieve the desired result. One example of this distinction is the one between Haskell’s (pure) let expressions and monadic <- binds.
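
To make that distinction concrete, a small sketch (the function names are made up):

-- Pure let bindings: the two equations are independent, so swapping
-- their order does not change the result.
circleArea :: Double -> Double
circleArea r =
  let r2  = r * r
      tau = 2 * pi
  in tau / 2 * r2

-- Monadic binds: the order is part of the meaning; swapping the two
-- getLine calls swaps which input ends up in which variable.
greet :: IO ()
greet = do
  first  <- getLine
  second <- getLine
  putStrLn (first ++ " " ++ second)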

It is not only about determining an order between sub-expressions. It is also about determining the constraints on the memory in which data is actually stored. As I mentioned, recovering the efficiency of an in-place update in general is a heroic task with no strict guarantee of success.

That’s a fair point. Haskell does allow that, of course, with the State monad, or STRef/IORef for better performance, or unboxed mutable arrays for the best performance. As I said before, I admit that that is clunky but I haven’t seen anything I prefer yet (though I haven’t looked at Hylo).
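
For instance, an in-place sieve with an unboxed mutable vector inside ST might look roughly like this (a sketch; sieveST and isqrt are my names for it):

import Control.Monad (forM_, when)
import qualified Data.Vector.Unboxed as U
import qualified Data.Vector.Unboxed.Mutable as M

-- Primality flags for 0..n, computed by crossing off composites in place;
-- the mutation is sequenced inside ST but the result is an ordinary pure value.
sieveST :: Int -> U.Vector Bool
sieveST n = U.create $ do
  v <- M.replicate (n + 1) True
  forM_ [0, 1] $ \i -> when (i <= n) (M.write v i False)
  forM_ [2 .. isqrt n] $ \p -> do
    isPrime <- M.read v p
    when isPrime $
      forM_ [p * p, p * p + p .. n] $ \m -> M.write v m False
  pure v
  where
    isqrt = floor . sqrt . (fromIntegral :: Int -> Double)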

At this point I am content to agree to disagree 🙂

Unfortunately, that “truth” still resists a denotative representation:

…but as I noted earlier, it’s mutation which is the actual mathematical misfit.


Some here (including me) would beg to differ:

As further evidence of this, consider the type ST s a. Just like IO a, it facilitates mutation and thus relies on sequencing - surely it must be just as irritating!
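
Concretely (a small sketch), the body below is a sequence of mutations that relies on that ordering, yet the resulting function is as pure as any other:

import Control.Monad (forM_)
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Internally sequenced mutation of an STRef; externally an ordinary pure function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  forM_ xs $ \x -> modifySTRef' acc (+ x)
  readSTRef acc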

But of the two, only IO a seems to attract all the criticism, e.g.:

…a curious dichotomy.


…more precisely, strict functional and imperative programming:


…which is why Haskell has the type ST s a - see State in Haskell (mentioned here) for the details.