What is a good formal definition of purity?

I always thought it was “pure” that was easy to define and “referential transparency” that was hard :slight_smile: (see Reddy’s two answers here)

That said, one difference I would imagine is that purity is a property of functions (which extends to a language) while referential transparency (in the sense that, say, Reddy argues is too naive) is a property of expressions in context (which extends to a language). I would also say that purity is a semantic property, while RT is at least in part a syntactic property – i.e. I can imagine a pure functional language which violates RT simply through syntactic tomfoolery, and I can also imagine an impure functional language which respects RT. On the last point, take a language without lambdas, with only named definitions (and further, with named definitions only of functions), plus print statements. I believe that one can always substitute names for their referents without changing the underlying semantics of sequences of expressions.

[edit: I changed that last section a bit, and it’s probably still not exactly correct, but I imagine there’s something that could work along those lines…]


I was hoping no one would notice this terminological issue :slight_smile: but it is fair to raise it, and probably helpful for the discussion to make it precise! What I mean by “referential transparency” isn’t actually that notion in Quine’s original sense, but something related, roughly “the soundness of the β axiom”, i.e. “β reduction is semantics-preserving”, i.e. (\x -> e1) e2 is semantically equivalent to e1[x -> e2]. I don’t really have the technical knowledge to make this completely precise, but hopefully it communicates my meaning*.

Now, I suspect that if the β axiom is sound (for a particular language and semantics) then all term contexts (in the language) are referentially transparent (for the particular semantics) in Quine’s original sense.

[*] and we probably need a whole bunch of other convenient things to go along with it, such as the equivalence of let x = rhs in e to (\x -> e) rhs.
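
For concreteness, here is a minimal Haskell sketch of the two equivalences being appealed to (the module and names are purely illustrative):

module BetaSketch where

-- (\x -> e1) e2 ...
betaRedex :: Int
betaRedex = (\x -> x + 1) (2 * 3)

-- ... its contractum e1[x -> e2] ...
contractum :: Int
contractum = (2 * 3) + 1

-- ... and the let form mentioned in the footnote.
letForm :: Int
letForm = let x = 2 * 3 in x + 1

-- All three should be observationally equivalent:
--   all (== 7) [betaRedex, contractum, letForm]  ==  True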


“Set-theoretic function” is (likely) not a part of that imaginary programming language (semantics); at least, it’s not a part of most languages I know of. Hence, saying that something is “equivalent” to that “set-theoretic function” (whatever that is) following that semantics doesn’t give you a well-defined notion.

Funnily enough, one of the more precise accounts of purity (to my knowledge) discusses this approach as one of the widely used ways to capture purity, and one that doesn’t really stand up to scrutiny. (Cf. right in the abstract: “We begin by showing that some proposed definitions which rely on confluence, soundness of the beta axiom, preservation of pure observational equivalences and independence of the order of evaluation, do not withstand close scrutiny.”)

“Set-theoretic function” is (likely) not a part of that imaginary programming language (semantics); at least, it’s not a part of most languages I know of.

You chopped out the following sentence: “I.e., which is equivalent to a mapping from input values to output values.”

One could of course make that more precise, but such a thing should be straightforwardly definable in most classes of semantic models we give to programming languages. Even big-step models, for example, could be extended with an explicit notion of a function-as-lookup-table and a corresponding reduction.
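
Just to make the shape of the idea concrete (a toy Haskell gesture at “function-as-lookup-table”, not a definition, and obviously inside a language rather than inside a semantic model):

module LookupTable where

import qualified Data.Map as Map

-- A "set-theoretic function" on a finite domain, represented extensionally
-- as a table from input values to output values.
newtype Table a b = Table (Map.Map a b)

-- Application is just lookup; Nothing means the argument is outside the domain.
apply :: Ord a => Table a b -> a -> Maybe b
apply (Table m) x = Map.lookup x m

-- A language-level function would then be "equivalent to a mapping" if it
-- agrees with some such table on every input, e.g.:
succTable :: Table Int Int
succTable = Table (Map.fromList [(x, x + 1) | x <- [0 .. 9]])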


This is no better, because it’s not clear what you mean by equivalent. And again, equivalence is a relation, which implies you have defined a set of objects you’re going to talk about. That set of objects may be: 1) the set of programs (syntax); 2) the set of programs modulo e.g. β-equivalence (a semantics); 3) the set of “mathematical” functions. But so far you haven’t defined such a set. I don’t see a way to define a relation on, e.g., the union of 2) and 3) – this just doesn’t make sense.

To be more concrete. Consider the C programming language. Is this function “equivalent to a mapping from input values to output values”

int f(int x) { printf("42"); return x + 1; }

? If yes, then by your definition it is pure, but I doubt anyone would call it so. If no, then could you explain how it is not “equivalent to a mapping…”? (Even though this notion doesn’t make sense to me, as explained above, maybe after your explanation I’ll finally get it!)


To be more concrete. Consider the C programming language. Is this function “equivalent to a mapping from input values to output values”

I don’t seem to be able to explain this clearly enough in this forum/format, or without a degree of formal setup that would be way too tedious both to write and read. But the question you are posing is not exactly well formed with respect to what I am suggesting. I am not suggesting that a function in the language be a “set-theoretic function” in the language. I am suggesting that its semantic interpretation (which presupposes a fixed choice of semantics) be equivalent, by the rules of equivalence of the target semantics, to a function in the domain of interpretation which is in some sense “a set-theoretic function”; typically this would arise because, e.g., the categorical semantics takes place in a setting into which something like FinSet embeds in a natural way.

So to answer the intended meaning of your question fully, one would need to specify an interpretation function for the C language into some semantic domain where there’s a given notion of equivalence and an obvious embedding of FinSet. One would then translate the given function over that mapping and ask if it lands in the image of the embedding of FinSet. There might be something in Strachey’s papers or the like that does some of this setup, but I don’t care to try to dig through them at the moment.

Edit: I suppose one could try to “internalize” this in some way, which might have been how my original suggestion read. In such a case one would simply define a notion of “set-theoretic function” internal to C, which would be somewhat ad hoc, but would attempt to capture universally the class of things that the more extended version I suggested does. So for the C language that could be “functions that can be written solely with the case and return constructs”. Since your example function cannot be written solely with case and return, it would not fall in the given class.
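
In Haskell terms (just my own illustration, not anything from the papers mentioned here), the idea would be that a function whose body is nothing but case analysis and a returned value is evidently a mapping from inputs to outputs, whereas the analogue of the C example above does something besides returning a value and so falls outside that class:

describe :: Int -> String
describe n = case compare n 0 of
  LT -> "negative"
  EQ -> "zero"
  GT -> "positive"

-- The analogue of the C function: it cannot be written with case and a
-- returned value alone, because it also prints.
describeNoisily :: Int -> IO String
describeNoisily n = do
  putStrLn "42"
  pure (describe n)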


Aha, yet more evidence I shouldn’t leave out technical details when discussing with Haskellers :slight_smile:

Yes, in fact Sabry’s paper is the inspiration for my belief that referential transparency[*] is a more formal description of what we mean when we talk of “purity” being a quality we value in Haskell. If you look at Sabry’s Definition 4.7 (Purely Functional Language) and translate it into Haskell it becomes something like

  • soundness of β, and
  • equivalence of f x and f $! x when the latter is defined, and
  • equivalence of (\x -> e) rhs and let x = rhs in e

Indeed Sabry gives an example to show that soundness of the β axiom alone does not suffice; the equivalence of f x and f $! x needs to be added.
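
To make the second bullet concrete (my own example, not Sabry’s): in Haskell, f x and f $! x can only come apart when x diverges and f never demands it, which is why the condition is stated “when the latter is defined”.

alwaysZero :: a -> Int
alwaysZero _ = 0

-- In GHCi:
--   alwaysZero (length [1 ..])      -- 0
--   alwaysZero $! (length [1 ..])   -- loops forever, so "the latter" is not
--                                   -- defined and the condition doesn't apply
--   alwaysZero 5                    -- 0
--   alwaysZero $! 5                 -- 0, agreeing with alwaysZero 5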

But this is all now very far from @Liamzy’s original intent, I think. I regret, yet again, that Discourse doesn’t have proper threading.

[*] in its new meaning, not Quine’s original meaning

…from Sabry’s paper?

Thanks @atravers, I amended my post to add the necessary side condition.

in its new meaning, not Quine’s original meaning

…from Sabry’s paper?

No, Sabry’s paper doesn’t discuss referential transparency. I mean the “new meaning” that has been adopted by Haskellers, which seems to me to be equivalent to the soundness of β.

I’ve split out the “formal definition of purity” discussion. I hope everyone agrees this could use its own thread.

I do agree, but even better would be if Discourse could be persuaded to support a “normal” version of threading, like, for example, Reddit.

What is a good formal definition of purity?

This looks relevant:

…so one possible definition of “purity” could just be the dual absence of intrinsic:

  • destructive assignment.

  • statement sequencing.

Where exactly Haskell now [2022 Jul] resides is left as a Ph.D. for the extremely keen…


…why the extremely keen? Because a large (very large!) cross-section of the Haskell community (and perhaps the wider FP community) will disagree with the proclamation, whatever it is. But at least no-one can make counter-claims of “bias”.

Contrast that with arriving at “a good formal definition” here, on a Haskell community thread:

  • if that definition means Haskell is still pure, there will be claims of “personal bias”, “parents always have beautiful children”, and the rest…

  • if that definition means Haskell is now impure, there will be claims about “programmers trying to be ~~computologists~~ computer scientists”, amongst many others…

Haven’t we already got enough matters to deal with as a community?

Page 2 gives the high-level answer:

A language is purely functional if (i) it includes every simply typed λ-calculus term, and (ii) its call-by-name, call-by-need, and call-by-value implementations are equivalent (modulo divergence and errors).
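
A tiny Haskell illustration of why clause (ii) excludes effects (my own example, using Debug.Trace as a stand-in for a “real” effect): once an effect is observable, the evaluation strategies become distinguishable.

import Debug.Trace (trace)

noisy :: Int
noisy = trace "evaluated!" 1

ignoreArg :: Int -> Int
ignoreArg _ = 0

-- Under call-by-need (GHC), this prints only "0": noisy is never demanded.
-- A call-by-value implementation would evaluate noisy first and also print
-- "evaluated!". Clause (ii) asks that such differences not exist, apart from
-- divergence and errors.
main :: IO ()
main = print (ignoreArg noisy)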


It’s only 2022, the technology just isn’t there yet.

More seriously: It’s a sad state of affairs that Discourse doesn’t have a reasonable implementation of threading :frowning:

I suspect it’s a deliberate choice on their part in the name of “simplicity”, but not a choice that I agree with, personally.

tomjaguarpaw, BardurArantsson:

…if you really think anything more needs to be added there.

I like this answer. In particular, it rules out non-terminating programs, which is also a side effect, imo.

Perhaps building on that intuition: a side effect is any information you get while running a program. A pure program indeed always terminates, and there isn’t anything you get to know by running it. The untyped lambda calculus has non-termination, so by successfully evaluating an expression you do learn something.

Now, the notion of side effect you want to observe also depends on how you interact with, or instrument, your evaluator. Take local state as an example: I consider a function that uses interior mutability (as in Rust) to be pure as long as that mutability doesn’t leak outside.

You can model such “side effects” entirely as a desugaring to the pure calculus. Your language’s machine implementation might do something smarter. It doesn’t matter, as long as the external observer can’t (or decides not to) see the difference. By contrast, if the machine has to ask the observer for a new ref cell, for example, that’s when the program is no longer free of side effects.
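
Haskell’s ST monad is one concrete version of “interior mutability that doesn’t leak” (sketch below; the function name is mine): the sum is computed with a mutable cell, but runST’s type guarantees no reference escapes, so the whole thing is an ordinary pure function.

import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                  -- local mutable cell
  mapM_ (modifySTRef' acc . (+)) xs  -- mutate it while traversing the list
  readSTRef acc                      -- return the final value; the cell never escapes

-- sumST [1 .. 10] == 55, and no observer outside sumST can tell that
-- mutation was involved.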

I think this is relevant: On the Expressive Power of Programming Languages by Shriram Krishnamurthi [PWLConf 2019] - YouTube

If you can talk about expressive power, you can start defining what you mean by pure. What power must a language have or lack to be pure?

It matches my intuitions better than other formulations. (Thinking about “side effects” is a red herring.)

That’s interesting: I was looking to see if there were any slides for Krishnamurthi’s presentation when this appeared:

…is there a connection?