Why imperative code is worse than functional code

Great counter-example! Also quite plausible that you’d have basic coverage over some sequence of topics, then Advanced coverage over exactly the same sequence. For each Advanced area you might start with “Recap”, “Installation”, … to load some extra packages.

(I was going to ask back on the O.P. with its talk of “key-value” whether the key was ‘lesson name’ or the combo ‘section name+lesson name’.)

It does work! It would return a lazily-consumable infinite list :slight_smile: EDIT: although on trying it I discover that infinite input seems to require the lazy State monad :frowning:
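For anyone curious, a minimal sketch of that behaviour (the numbering function is my own stand-in, not the original code): with Control.Monad.State.Lazy, traversing an infinite list yields a result that can be consumed incrementally, whereas the strict variant diverges.

```haskell
import Control.Monad.State.Lazy (evalState, get, put)

-- Pair every element with a running counter. With the *lazy* State
-- monad the output list can be consumed incrementally, even for
-- infinite input; Control.Monad.State.Strict would loop forever here.
number :: [a] -> [(Int, a)]
number xs = evalState (traverse step xs) 0
  where
    step x = do
      n <- get
      put (n + 1)
      pure (n, x)

main :: IO ()
main = print (take 5 (number (repeat 'x')))
-- [(0,'x'),(1,'x'),(2,'x'),(3,'x'),(4,'x')]
```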

1 Like

There are a lot of Haskell versions, so I’d need to know which one you’re referring to, but I think a lot of the solutions would give an acceptable answer, especially if what you’re doing is serialising to something like JSON and streaming it.

The original problem is meant to work on a list of sections, where a section is a key-value data structure containing a list of lessons, where a lesson is also a key-value data structure. Nowhere in the problem statement is JSON mentioned, except for

Here is an example input (formatted in JSON for convenience):

and

The output should be (formatted in JSON for convenience):

I will agree that the code as given will probably work for whatever the intended use-case was, but “containing two references to the same object in a collection being traversed while performing mutation” is not some really obscure edge case; it’s something that occurs all the time.
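To make that concrete, a contrived IORef-based sketch of my own (names hypothetical) of exactly that situation: the same mutable object appears twice in the collection being traversed, so an in-place update pass hits it twice.

```haskell
import Data.IORef (newIORef, modifyIORef', readIORef)

main :: IO ()
main = do
  shared <- newIORef (0 :: Int)
  let lessons = [shared, shared]      -- two references to one object
  -- an imperative "add one to every lesson" pass, mutating in place:
  mapM_ (\ref -> modifyIORef' ref (+ 1)) lessons
  readIORef shared >>= print          -- prints 2, not the expected 1
```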

2 Likes

…and all of that is “a bit” of a:

Why? Because the questionable use of simple(ton) examples has probably occurred before:

  • no doubt there were various snippets of assembly code which purported to show the superiority of assembly over that “new-fangled” Fortran and LISP stuff.

  • there probably were analogous snippets of code showing the superiority of goto over structured programming.

  • and one can only imagine all the snippets of code purporting to show the superiority of manual memory ~~manglement~~ management over GC!

They are all examples of “storms in teacups”…the teacups found in child’s-toy dollhouses!


Getting back to the actual topic of this thread:

Mutation.

While The Anatomy of Programming Languages also nominates sequencing as a problem:

  • arbitrary sequencing but no mutation: confluence.

  • arbitrary mutation but no sequencing: nondeterminism!

Standard ML and OCaml both illustrate this point: at first glance, definitions in both languages bear little to no resemblance to the rambling sequences of statements found in the likes of Pascal or C. But that lack of visible sequencing is irrelevant: due to the relative ease of access to mutation both provide, Standard ML and OCaml are also imperative languages.
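A small sketch of that dichotomy (my own toy example): without mutation, evaluation order is a non-issue; the moment an IORef is involved, some order has to be imposed, here by IO’s monadic sequencing.

```haskell
import Data.IORef (newIORef, writeIORef, readIORef)

-- No mutation: x and y may be evaluated in any order (confluence);
-- the result is 15 regardless.
pureSum :: Int
pureSum = let x = 1 + 2
              y = 3 * 4
          in x + y

-- With mutation, order matters: swapping the write and the read
-- changes the result, so a sequence must be fixed somewhere.
impureSum :: IO Int
impureSum = do
  r <- newIORef (1 :: Int)
  writeIORef r 10
  x <- readIORef r      -- 10; would be 1 if read before the write
  pure (x + 1)
```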


Alright, so what about Haskell and its use of seemingly inexplicable interfaces?

…for something better (or a proof that none exists, analogous to that for the Entscheidungsproblem).

2 Likes

This is one of my favorite things about functional code. In imperative code, in order to understand the value inside a mutable variable, you must read its entire scope. In functional code, a value’s expression is all you need to understand it. You can navigate to its building blocks’ definitions with ease thanks to LSP, too, though I wish the Haskell language server could also navigate to libraries’ definitions.
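A toy contrast (my own example, not from the post above): in the first version, the accumulator’s final value depends on every statement in its scope; in the second, the single expression says it all.

```haskell
import Control.Monad (forM_)
import Control.Monad.ST (runST)
import Data.List (foldl')
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Imperative style: to know what 'acc' holds at the end, you must
-- read every statement that can touch it.
totalImperative :: [Int] -> Int
totalImperative xs = runST $ do
  acc <- newSTRef 0
  forM_ xs $ \x -> modifySTRef' acc (+ x)
  readSTRef acc

-- Functional style: the value is fully determined by this expression.
totalFunctional :: [Int] -> Int
totalFunctional = foldl' (+) 0
```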

I also use vim and I’ll try haskdogs; it seems like exactly what I need. Thanks!

Perhaps the more modern question is not “imperative versus functional”, but formal syntactic code languages (imperative, functional, logical, …) versus machine-generated computations (ML, GPT, etc.). If one can give plain-English descriptions and some AWS keys, and get back “a system”, the languages and transfer formats in between may lose relevance. That’s if you take the MIT view, anyway. (See Sussman on why they switched from Lisp to Python.)

If, indeed.

As noted memorably by Dijkstra, plain human languages would only suffice for “trivial” usage (depending on how “trivial” is defined). For more rigour, a more precise and therefore restricted language is needed.

3 Likes

I can only hope that an A(G)I smart enough to write complete software from plain-English descriptions will understand that it needs to explain its work, and thus present the “system” using an easy-to-understand (e.g. functional/denotative) formalism.

The MIT view is very disappointing. They observed a problem: nobody understands software any more. And then they just decided to give up and stop teaching how to understand software. That seems like the opposite of what society needs.

2 Likes

I agree. I don’t know the MIT view, but it would be silly to stop developing formally precise languages just because a few programming tasks can be described using ML. That would be as silly as to stop developing mathematics and proof theory just because sometimes we can get by using only statistics.

According to @chrisdone, the replacement of Lisp with Python at MIT seems to be related to Java replacing Haskell at the University of Texas. So despite it usually being the more obnoxious of the two, the superficial simplicity of imperative code (and programming) continues to beguile.

…and seems destined to remain a novelty as more and more imperative “dance halls” appear, despite the goad of parallelism.

I don’t think it is related in this way. See the explanation from Sussman in the video that Chris linked above. Sussman is essentially saying that it became impossible to fully understand software systems as the software (and hardware) grew exponentially from 1980-1997, and so they quit. They transitioned to a “poke at it until it seems to do what you want” approach. The reason Python was chosen is simply that it had the largest robotics libraries at the time.

the last decades have shown in the Western world a sharp decline of people’s mastery of their own language …

Yeah, miserable old s.o.b.s have been complaining about the yoof since – oh – Socrates’ time.

We still have to get over the bump from an enterprise’s own description of what it wants (which might use some formal tools/notations, but won’t be in a programming language) to a ‘proof’ (I’m joking) that this particular suite of programs delivers to that description.

1 Like

That seems to be saying the reason is libraries: volume of, complexity of. Sussman doesn’t mention models of computation.

Modern programming is not needing to understand at the level of data-items-through-processor (he’s saying); but rather what does this library do? (because it’s so humungous the docos are never adequate); what do these other libraries do? How do I plug the output from one into the input to another to get the solution? (I get the analogy of a 1,000-pin chip vs. individual capacitors and diodes, or even 12-pin ICs.)

So…? If LISP had plenty of libraries, MIT wouldn’t have switched? If Haskell – as a ‘glue’ language – could interface smoothly to Python libraries, he’d use Haskell? I fear Haskell’s laziness would remain an insurmountable impedance mismatch.

…do try to remember that sentiment when you’re old! As for me…well, I prefer this depiction of certain CS luminaries:


…possibly, yes: it has been used to build entire operating systems.

…before giving up, perhaps we should see how another declarative language works with laziness:

Lazy Stream Programming in Prolog (2019).

I think the question shouldn’t be, “why is imperative code worse than functional code”, but rather, “Is imperative code always worse than functional code, and if not, when is it better?”

That was essentially the point being made by Jose Valim (or maybe SPJ?) in the other thread.


As to whether imperative code is always worse: well, let’s say we want a list of instructions, i.e., “wash the dishes, clean the sink, take out the trash”. The natural form of this list of instructions is, well, a list of sequential instructions. Should we instead specify “the house after the trash has been taken out, the sink has been cleaned, and the dishes have been washed”?

That is to say, there are some applications wherein by definition, imperative programming is better than functional programming because the problem is fundamentally imperative.


The other issue is that imperative programming is sort of played out, whereas functional programming still has a lot of life left in it for development and exploration. In my view, the Jose Valim question is basically just asking for better combinators (unconcat for lists, zipover, although most of these needs are already met by lens).
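For what it’s worth, one plausible reading of that unconcat (my guess at the intended combinator: an inverse of concat, given the piece lengths; I won’t guess at zipover):

```haskell
-- Split a flat list back into pieces of the given lengths, so that
-- unconcat (map length xss) (concat xss) == xss for finite xss.
unconcat :: [Int] -> [a] -> [[a]]
unconcat []       _  = []
unconcat (n : ns) xs = ys : unconcat ns zs
  where
    (ys, zs) = splitAt n xs
```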

In the event that a good or better functional programming technique does not yet exist, imperative programming is better because you do not have to develop the alternative and then propagate the idiom to make it understandable.

Say, for instance, you’re working in a language without map or filter, where it’s not idiomatic to use such functions. In this case, you’d have to implement map and filter yourself before use, which is unergonomic and can make your code less readable due to others’ unfamiliarity with your idiom.


On the other hand, this does not mean that we should settle for C- or Fortran-style code in every language. Haskell, after all, is a research language.

We should be appreciating and looking forward to problems like the ones Jose Valim proposed, because it’s through problems that we find and push the limits of functional programming.

2 Likes
  • Say, for instance, you’re working in a language without map or filter, and it’s not idiomatic to use such. […]

    Hrm. Say you’re working in a language without convenient access to mutation, e.g. Haskell, and it’s not idiomatic to use such a feature in said language…

    • (removeTrash . cleanSink . washDishes) :: House -> House

    • (removeTrash' <=< cleanSink' <=< washDishes') :: House' -> ST s House'

  • […] there are some applications [for] imperative programming because the problem is fundamentally imperative.

    • In the event that a good or better functional programming technique does yet not exist […]

    …and that’s where you would first try using encapsulated state (ST, runST and co.); see the sketch after this list.

  • In my view, the Jose Valim question is basically just asking for better combinators […]

    “How many combinators does it take to screw in a light bulb?”

  • […] it’s through problems that we find and push the limits of functional programming.

    …you mean like the lack of adoption of nonstrict evaluation in newer languages such as Elixir?
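To make the encapsulated-state suggestion concrete, a minimal sketch, assuming a hypothetical House' that is just a chore count behind an STRef; each chore mutates in place and hands the house back, so the Kleisli pipeline from earlier typechecks, and runST keeps the whole affair private:

```haskell
import Control.Monad ((<=<))
import Control.Monad.ST (ST, runST)
import Data.STRef (STRef, newSTRef, modifySTRef', readSTRef)

type House' s = STRef s Int   -- outstanding chores

chore :: House' s -> ST s (House' s)
chore h = modifySTRef' h (subtract 1) >> pure h

washDishes', cleanSink', removeTrash' :: House' s -> ST s (House' s)
washDishes'  = chore
cleanSink'   = chore
removeTrash' = chore

-- Mutation and sequencing stay inside; the caller sees a pure function.
tidyUp :: Int -> Int
tidyUp chores = runST $ do
  h <- newSTRef chores
  readSTRef =<< (removeTrash' <=< cleanSink' <=< washDishes') h
```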

1 Like
  • (removeTrash . cleanSink . washDishes) :: House -> House
  • (removeTrash' <=< cleanSink' <=< washDishes') :: House -> ST s House

Which poses a worse problem, no? This is a chain of function applications, and the first problem is that it’s (.), not (>>>). The bigger problem is that you can think of function applications imperatively, i.e., you are working in a pipeline, taking the action of transforming values from a → b, and so on. Church-Turing, no?
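To illustrate (with hypothetical stand-ins of mine for House and the chores), the same pure pipeline written both ways; (>>>) from Control.Category reads in the imperative, left-to-right order:

```haskell
import Control.Category ((>>>))

newtype House = House Int deriving Show   -- chores remaining

washDishes, cleanSink, removeTrash :: House -> House
washDishes  (House n) = House (n - 1)
cleanSink   (House n) = House (n - 1)
removeTrash (House n) = House (n - 1)

-- (.) composes right-to-left; (>>>) left-to-right, matching
-- "wash the dishes, then clean the sink, then take out the trash".
tidyUp, tidyUp' :: House -> House
tidyUp  = removeTrash . cleanSink . washDishes
tidyUp' = washDishes >>> cleanSink >>> removeTrash
```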

Given that ST / runST performs worse than strict State (Control.Monad.State.Strict) with modify’ in many cases, it might not be a good idea. Accumulating parameters and strict State give enough access to essentially imperative thinking.
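A sketch of that strict-State alternative (a toy example of mine): modify’ forces each intermediate value, giving imperative-style updates with no STRef machinery.

```haskell
import Control.Monad (replicateM_)
import Control.Monad.State.Strict (execState, modify')

-- Do n chores: strictly decrement the chore count n times.
countdown :: Int -> Int
countdown n = execState (replicateM_ n (modify' (subtract 1))) n
-- countdown 3 == 0
```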

There’s also the question of, well, since statements in do notation are values in Haskell (see the Haskell 2010 Report), whether it’s more convenient and flexible to have access to:

[ washDishes
, cleanSink
, removeTrash
]

wherein it’s possible to sequence_ the operations together, as well as reverse, foldr, and so on, over the [HouseAction] list. A bit Lispy, I guess.
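For instance (with a hypothetical HouseAction = IO () and chores of my own naming):

```haskell
washDishes, cleanSink, removeTrash :: IO ()
washDishes  = putStrLn "dishes washed"
cleanSink   = putStrLn "sink cleaned"
removeTrash = putStrLn "trash removed"

chores :: [IO ()]
chores = [washDishes, cleanSink, removeTrash]

main :: IO ()
main = do
  sequence_ chores             -- run in the listed order
  sequence_ (reverse chores)   -- actions are values: reorder at will
```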


My own stance is, well, generally, the four cases where essentially imperative code is acceptable are:

  • You are throwing effects, and Conal Elliott’s ask for a better alternative to IO and monads hasn’t been fully fulfilled, despite monad transformers, free monad interpreters, and effect libraries.
  • You need more precise control over performance, or just better performance.
  • You’re limiting your idiom to make the codebase easier to onboard onto.
  • You don’t have a better functional solution available.

My real opposition here is against the desire to “declare victory” and be done with it. The last I checked, no one had managed to get a zip solution that was more succinct than the Python one, and I suspect this can’t be done without lenses (after which it’s roughly a 2-3 liner). The real issue with this problem for Haskell, I think, is that Haskell runs on ADTs, not objects, so you either cheat with the datatype, sacrificing fidelity, or resort to lenses, which exist to make this problem go away but do so at the cost of readability.

2 Likes

No.

If you allow mutation in a programming language, you are immediately confronted with a choice: sequencing or nondeterminism. Furthermore, if you allow mutation to be accessed from (potentially) everywhere in that language, then the choice of sequencing or nondeterminism will also apply everywhere.

My first example, (removeTrash . cleanSink . washDishes), is only sequential: no mutation involved.

As for the second example, let’s assume the worst: ~~House~~ House' is STRef-based and therefore relies on mutation. Now for that choice: we don’t want nondeterminism, so we must choose sequencing, which the monadic ST type provides. But fortunately for us, there’s also runST, which allows us to keep the use of mutation (and the need for sequencing) private: nice!


…and given its similarity to (strict) ST in GHC, is IO also affected?


Considering that Haskell continues to require mutation (and therefore sequencing) for I/O after the ~~better~~ longer part of three decades, I suspect that such a declaration will not be happening any time soon. So our functional programs will continue to be boats in an imperative sea, and we will have to keep watch for, and reseal, leaking hulls:

http://discourse.haskell.org/t/an-epic-future-for-spj/3573/9

Implicitly, RW, no? Remove trash, clean sink, wash dishes: these are actions outputting to a global state. So there is an implicit form of mutation occurring, because we are jumping from the semantics of a functional programming language to the real world; i.e., in Haskell, House would be a wrapper, effect system, or monad transformer stack over IO.
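For concreteness, one hedged reading of that last sentence (a hypothetical HouseM of my own devising, with an Int standing in for the house’s state):

```haskell
import Control.Monad.IO.Class (liftIO)
import Control.Monad.State.Strict (StateT, execStateT, modify')

type HouseM = StateT Int IO   -- "global" house state over IO

washDishes, cleanSink, removeTrash :: HouseM ()
washDishes  = liftIO (putStrLn "dishes washed") >> modify' (subtract 1)
cleanSink   = liftIO (putStrLn "sink cleaned")  >> modify' (subtract 1)
removeTrash = liftIO (putStrLn "trash removed") >> modify' (subtract 1)

main :: IO ()
main = execStateT (washDishes >> cleanSink >> removeTrash) 3 >>= print
```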

So perhaps we’re misunderstanding each other.


As to IO vs ST, I’ve seen cases wherein IORefs seem to optimize better than pure code (IORef is used as a data carrier here for unsafePerformIO).


As to declaring victory: an accumulating parameter might be “simulating” state, or having state be handled by the compiler, but it’s still effectively stateful, in that state is being stored in a function argument. That’s why I kept going back to the Jose Valim problem; I kept trying to get a lens-based solution out of it that would avoid the need for a quasi- or crypto-imperative State monad.
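For instance, the “state in a function argument” shape (a toy example of mine, not the Valim problem itself):

```haskell
{-# LANGUAGE BangPatterns #-}

-- The running total is "state", but it lives in an argument and is
-- threaded explicitly by the recursion; no mutable cell anywhere.
sumAcc :: [Int] -> Int
sumAcc = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs
```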

1 Like