Why imperative code is worse than functional code

Continuing the discussion from Beautiful functional programming:

@tomjaguarpaw asks why people think imperative code is worse than functional code. I think that is an interesting discussion topic.

First of all, I personally wouldn’t say functional, but rather denotative.

My inspiration for striving for denotative code comes from @conal’s podcast appearances and his ZuriHac presentation. He can explain it much better than I can in a short Discourse post:

Some memorable points that resonated with me are (paraphrased):

  • The von Neumann bottleneck is about more than just the data path between the CPU and RAM; the more important bottleneck is the way it limits our thinking about computation.

  • The real world is not sequential, it is massively parallel. Even modern computers themselves have had to admit parallelism and now have to try very hard to pretend to be sequential.

  • There is no large and small in thought, so approaches that do different things at different “levels” won’t stand the test of time. For example: “functional core, imperative shell”.

Of course many programmers have to write practical programs and write them now. And the theory of computation is currently not well-developed enough to give definitive answers for many practical problems. However, I think we should also keep thinking about what future we are heading towards.

I’m sure there are people who disagree with this point of view, so I’d love to hear your thoughts.

13 Likes

From an engineering point of view, I don’t think an imperative style is inferior to a declarative style when both are equally correct and SOLID with respect to surrounding code. The only edge that a declarative style has is that it inclines the reader to use equational reasoning, which sometimes helps with shifting the focus between the general intent of the code and implementation details. For me it functions like a mental “collapse/expand” switch which helps with reading big code bases.

The rest of the distinction between the two styles is a bit overrated, since all languages have constructs and idioms that borrow from either side.

Thus I think it is more perspicuous to compare not just the two styles, but the full package of “syntax + semantics”. At the end of the day, I think all languages strive for code that is easy to reason about, easy to type-check, and easy for the compiler to optimize, and I cannot imagine that “imperative versus functional” makes much sense for this task. You need both, in different portions of the code.

1 Like

Yeah. What limits my thinking (for example with that Python code in the Beautiful thread) is destructive update: you’re two levels deep inside a data structure in a nested for; one branch of the if resets a global variable; the other branch increments it; the possibly-incremented value is updated into the data structure. Now try equational reasoning over the effect on the overall data structure.

Imperative code doesn’t have to be that confusing: don’t use global vars/always pass them as params; don’t update global data structures in situ/return the updated version (copy) from the function; apply destructive update only to local vars, where you can see their whole scope in one screenful of code.
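
As a minimal sketch of that last discipline (the names here are made up for illustration): return the updated version of the structure instead of mutating anything shared, so the whole effect is visible in the function’s type:

```haskell
import qualified Data.Map.Strict as Map

-- Instead of incrementing a global counter in place, return the
-- updated structure; the caller decides what to do with the new version.
bumpCount :: String -> Map.Map String Int -> Map.Map String Int
bumpCount key = Map.insertWith (+) key 1

-- e.g. bumpCount "x" Map.empty == Map.fromList [("x", 1)]
```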

But OOP pretty much forces you to use destructive update to invisible/abstracted elements of a data structure.

Destructive update does make sense in IO: program variables are mirroring physical changes or database changes in persistent storage outside the program run. So I love the discipline that the IO Monad imposes.
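
For instance (a tiny sketch, nothing more): an IORef gives you the destructive update, while the IO in the type advertises it to every caller:

```haskell
import Data.IORef

-- The mutation is real, but the IO type makes it visible in the signature.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)  -- stands in for some external, changing state
  modifyIORef' counter (+ 1)
  readIORef counter >>= print     -- prints 1
```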

1 Like

Given a correct imperative program and a correct functional program, there is indeed little difference. However, it will probably be much easier to determine the correctness of the functional program in the first place.

Imperative programming comes with more complex semantics, which often makes writing correct programs more challenging. Conal believes the primary reason for this is that the imperative approach can cloud the simple specification of a program’s correctness, subsequently making verification and optimization tougher.

I think many functional programmers would likely nod in agreement, pointing out that imperative programming leans heavily on instructions and the ‘how-to’—muddying the code with considerations about representation, sequencing, memory management, and so on.

That said, I believe functional, or rather denotative programming, offers a superior method for achieving clear and correct programs. This is primarily because it allows for simpler and more precise semantics. This simplicity is palpably evident in his Denotational Design methodology, which often imparts a sense of profound inevitability.
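
To give a rough flavour of what I mean (a toy sketch of my own, not taken from Conal’s actual material): in Denotational Design you pick a precise mathematical meaning for a type, and the instances then follow almost inevitably. For example, denoting a time-varying value as simply a function of time:

```haskell
-- Meaning: a Behavior is (denotationally) just a function of time.
newtype Behavior a = Behavior { at :: Double -> a }

-- Given that meaning, the Functor instance writes itself:
instance Functor Behavior where
  fmap f (Behavior g) = Behavior (f . g)
```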

2 Likes

I always understood the difference between imperative and functional as “how to modify the current state to end up in the desired state” vs. “describe the desired state”.
Depending on the problem, one might be easier than the other, but usually the former is quicker to write while the latter is easier to test for correctness.
Ultimately it’s a matter of trade-offs, but by choosing Haskell we have already made a choice.
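
A classic sketch of the contrast: insertion sort in Haskell reads as a description of what a sorted list is, rather than a recipe of index manipulations:

```haskell
-- Describe the desired state: an element inserted into an already
-- sorted list, and a sort built by folding that description.
insertSorted :: Ord a => a -> [a] -> [a]
insertSorted x [] = [x]
insertSorted x (y:ys)
  | x <= y    = x : y : ys
  | otherwise = y : insertSorted x ys

insertionSort :: Ord a => [a] -> [a]
insertionSort = foldr insertSorted []
```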

But that isn’t how computers perceive their surroundings: all outside interactions between a computer and the rest of reality occur via a finite number of I/O registers. These registers have two properties:

  • shared - each program on the computer ultimately has to use the same set of registers, these days quite often concurrently.

  • mutable - as there is only a finite number of them, registers have to be reused, which necessarily changes their contents.

So the computer perceives its surroundings as being a shared, mutable resource. That means there are actually two von Neumann bottlenecks:

  • CPU and memory addresses - where declarative/denotative techniques work;

  • CPU and I/O registers - where declarative/denotative techniques fail.

This is why I now consider I/O to be a mathematical problem - right now, there is no mutually-agreeable mathematical entity which can be used to denote simple teletype I/O:


⟦ getChar ⟧Haskell = …

⟦ putChar ⟧Haskell = …


…let alone the entirety of I/O in Haskell 2010, FFI and all!

So why is imperative code worse than functional code? I say it’s because we’re all interested in writing our programs using (as mathematical as possible) functions, rather than some imperative emulation of subroutines (particularly those with observable effects).

2 Likes

But this is probably the main reason why we generally consider imperative code to be worse:

…in general, we just aren’t that “imperatively inclined” :-)

P.S.: while Conal definitely brought it back to prominence, there have been other calls for liberation from I/O, at least in Haskell:

Can GUI Programming Be Liberated From The IO Monad? (2005)

Hmm? That article seems to be talking about counting and estimating relative quantities.

I think the important capability for programming is symbolic reasoning/algebra. It’s true that in the school curriculum that’s taught under Maths/with numbers; but it could equally be taught with syllogisms or logic/inference over propositions. Indeed it might be better to teach it quite separately from Maths: strong reasoning is an important skill for adulthood, whether or not you go on to any Math/science studies; I know people with sound reasoning skills, but who just ‘freeze’ when faced with arithmetic.

The reason I get nervous when reading imperative code is the risk of effects outside the scope of the code in front of me. A Turing machine’s tape is infinitely long in both directions: can I be sure this code will affect only this tiny patch of the tape? Will some distant piece of code start snooping on my locations and upset my workings, or will my internals/‘leftovers’ affect it?

So maybe this is much more about encapsulation than model of computation? If I have argument (immutable) and result only, I don’t have to worry about implementation details leaking out.

1 Like

Agree violently - the problem, however, is that teaching abstract concepts without a good intuition for concrete applications/examples is doomed to fail.

Traditionally, Math has used arithmetic and geometry as the main application domains for teaching. Arithmetic is of course an important civilizational skill, and it should be taught in schools, but due to the traditional link between arithmetic and Math, the two get lumped together, and you get a curriculum where students spend 5-10 years learning arithmetic before getting to a point where they can abstract from all that and start venturing into the realm of symbolic reasoning.

The question, then, is what would make for a good concrete application domain for teaching these things? The need for arithmetic is an easy sell - we need to calculate things in daily life all the time, and people who struggle with arithmetic are clearly at a disadvantage in all sorts of situations.

Symbolic reasoning and formal logic are much harder to convince people of. They shouldn’t be, but the reality is that logical fallacies are commonplace, and in mainstream Western culture at least, it is much more embarrassing to think that 6 + 8 equals 12 than to confuse absence of proof with proof of absence.

The two go hand in hand.
A pure expression is naturally encapsulated; an effectful expression is not, because it has “hidden” dependencies on the outside world. The kind of “encapsulation” you get in typical OOP languages is pretty weak - an object can hide its internal state from the outside world, and that encapsulation boundary helps somewhat, but there are two big problems with it:

  • Because object state is mutable, and object methods can take and return mutable references, action-at-a-distance is still commonplace, so even though you aren’t exposing your internals directly, the rest of the world will still have references through which your object state can be manipulated, and the arguments you accept can still be manipulated from outside after they have been passed to you. And of course mutability isn’t the only effect that matters here - pretty much any other effect violates encapsulation in similar ways.
  • The object’s interface is the only encapsulation boundary you get - you can only hide object internals from the outside world, and while you get some compositionality by defining more objects, this is a manual effort, and you can’t divide your object up into smaller self-contained units. Compare that to pure expressions, where you can zoom in and out to arbitrary levels of detail, and at each level, your expressions are self-contained, fully encapsulated units that you can safely scrutinize in isolation.
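
A minimal Haskell sketch of the first problem, using IORef to stand in for mutable object state (the names are invented for illustration):

```haskell
import Data.IORef

-- Two "owners" alias one mutable cell: a write through one reference
-- is observed through the other, despite any interface "hiding".
aliasing :: IO ()
aliasing = do
  shared <- newIORef (0 :: Int)
  let mine   = shared   -- my object's "private" state…
      theirs = shared   -- …also held by some distant piece of code
  writeIORef theirs 42  -- distant update
  readIORef mine >>= print  -- prints 42
```
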
2 Likes

Correct. The monadic ST type exemplifies this: runST ensures that no imperative implementation details are exposed to callers of functions which internally rely on ST.
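
A small sketch of that: the accumulation below is imperative on the inside, yet the function is observably pure, because runST seals the mutation in:

```haskell
import Control.Monad.ST
import Data.STRef

-- Internally imperative, externally pure: the STRef cannot escape runST.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc
```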

1 Like

Correctness is only one aspect. We need performance/scalability too. Nowadays that means that programs must be able to take advantage of parallel hardware. The sequential nature of imperative programs makes it very difficult to do that.
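
For example (a hedged sketch using the parallel package’s Control.Parallel.Strategies): because the mapped function below is pure, the elements can be evaluated in parallel without changing the meaning of the program:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Pure, so evaluation order is unobservable: the strategy is free to
-- evaluate the list elements on however many cores are available.
squares :: [Int] -> [Int]
squares = parMap rdeepseq (\x -> x * x)
```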

2 Likes

One can only have scalability and performance alongside correctness (or at least a correctness specification); otherwise the empty program would be the best candidate for the fastest program.

Correctness goes hand in hand with optimization. One needs a proper notion of what it means for a program to be correct in order to be sure an optimization step didn’t break it.

Edit: this means that imperative code, which makes it much harder to reason about or come up with a correctness specification for, also hinders performance.

1 Like

…unless there’s been some theoretical “advance” or “breakthrough” which I’m not aware of.

Logic gates:

  • students can be introduced to the idea of encoding whether something is true or false using a simple switch and light (or LED).

  • then students can be introduced to those “tried and tested” basic combinations of switches: AND, OR, NOT.

  • at the point when the complexity of wiring diagrams and physical components everywhere has students frustrated…introduce the basics of Boolean algebra!
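
If it helps to make that last step concrete, here is a tiny Haskell sketch (my own, purely illustrative) of gates as Boolean functions, with De Morgan’s law checked over all inputs:

```haskell
-- Gates as functions on Bool: the "wiring diagram" becomes algebra.
andG, orG :: Bool -> Bool -> Bool
andG = (&&)
orG  = (||)

notG :: Bool -> Bool
notG = not

-- De Morgan's law holds for every combination of inputs:
deMorganHolds :: Bool
deMorganHolds =
  and [ notG (a `andG` b) == (notG a `orG` notG b)
      | a <- [False, True], b <- [False, True] ]
```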

But I am definitely not an educator: I will defer to their opinion on an approach like this.

What point are you trying to make?

A programming language on its own is not enough of a specification of what it means for a program to be correct.

Imperative languages do have operational semantics, and I can still write a sorting program and pull my hair out trying to understand and reason about it, or even trying to prove it correct according to what it means for sorting to be correct. If I struggle so much to prove something as simple as sorting, how can I begin to understand how to optimize without breaking correctness?

The operational semantics of an imperative language is its downfall.

…actually, it was an attempt at irony - namely that your comment about specification strongly resembles a paragraph from a textbook written 30 years ago. But your response does seem to answer my enquiry regarding:

some theoretical “advance” or “breakthrough” which I’m not aware of.

…there hasn’t been one.


The operational semantics of an imperative language is its downfall.

…did you mean “denotational semantics”? I’ve seen a few operational semantics for (small pedagogical) functional languages, and they can get quite intricate rather quickly as more “practical” details are included.

Oh okay :grin:

I said operational because that’s usually the case; there aren’t many imperative languages with a denotational semantics.

Is there any evidence to back up this conjecture?

1 Like

No, and there is even an easy counterexample: the session/lesson annotation problem referred to in this thread.
Unlike any of the proposed alternatives, the Python solution is trivial to check for correctness.

But when things grow more complex, I believe this conjecture holds.