Let's agree to be different. On empirical and deductive nature of coding

I also consider myself to be a little of both (though typically not at the same time).
It is an oversimplified, stereotypical view, and I have tried to acknowledge this clearly in the post.

I have worked with and met many people who have a strong polarization one way or the other.

In the sentence you quoted, for some people the first part is a bikeshed; for many, anything in the second part is a no-go. Are there people who are all in on engineering processes and also love the theoretical? Sure, there are a few of those too.

Quoting myself:

I need to emphasize that what I am presenting is an oversimplification. People are complex and cannot be easily labeled.

I am simplifying all of this and consider programmers to be either pragmatists or theorists and take an oversimplified (binary) and a somewhat stereotypical view of what these terms mean.

I wrote it that way because it explains a lot of things I cannot explain otherwise, e.g. certain decisions made by mainstream PLs, or certain patterns I see a lot in Haskell programs.

I want to comment on your statement:

All of the above, please. In fact, there aren’t even any trade-offs to consider here, there is no tension whatsoever between using dependent types and having a clean git history.

I love that you think this way. There is a great potential synergy between engineering and theory.
These things do not need to be juxtaposed, even if the people who know them well are somewhat divided by interests and backgrounds.

It’s a common thought experiment to imagine a person embodying an extreme of a trait in order to think more clearly about that trait and explore the spectrum. The prototype person of the trait is then promoted to a group in order to think about the resulting group dynamics.

If we don’t do this, we produce no categories at all.


And despite individuals being unique, group dynamics follow only a handful of observable effects.


Scientists use these 6 (or more) terms: inductive, empirical, a posteriori, deductive, rational, and a priori.

For the sake of completeness, abductive reasoning should also be considered. It matters especially for being a good problem solver: during program debugging, abductive intuition helps you find the most plausible causes, saving time (and, over time, earning you a reputation as an efficient problem solver).


There is one notable exception where piling up experiments does not result in extraneous complexity. It is called type safety.

This is not true. It’s very easy to create a difficult and complex type-safety design that hinders software development. Type safety makes bad ideas impossible and good ideas harder: the fancier the types, the harder they are to use, but the more good ideas can be safely expressed at all.
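
A small illustration of that trade-off (my sketch, using Data.List.NonEmpty from base): the bad idea becomes unrepresentable, but every caller pays a little for it.

import           Data.List.NonEmpty (NonEmpty)
import qualified Data.List.NonEmpty as NE

-- taking the head of a possibly empty list is now impossible...
safeHead :: NonEmpty a -> a
safeHead = NE.head

-- ...but callers without a proof of non-emptiness must handle Nothing
headOrDefault :: a -> [a] -> a
headOrDefault def = maybe def NE.head . NE.nonEmpty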

The part of FP that has been the most disappointing for me is the typically low quality of error outputs. This may have to do with all falsehoods being equivalent in mathematics [23]. However, we should not criticize the theory, but rather the theorists, for selecting abstractions that suppress or confuse error information.

I think this is completely off base. A ton of thought has been put into Haskell errors and exceptions. My opinion is that it’s simply very hard to produce good error messages when the state space of allowable programs and complexity of inference grow. Bidirectional type inference is an example: you really don’t know what is expected, just that two things don’t match.

The win here is empirical heuristics: when people make a mistake, what are they usually trying to do?
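
One way such heuristics can be baked into Haskell itself is custom type errors; a minimal sketch (my illustration, with a hypothetical message):

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}

import GHC.TypeLits (ErrorMessage (..), TypeError)

-- an instance that exists only to replace the generic
-- "No instance for (Eq (a -> b))" with a guess at what
-- the user was probably trying to do
instance TypeError ('Text "Functions cannot be compared for equality."
                    ':$$: 'Text "Did you mean to apply the function to an argument first?")
      => Eq (a -> b) where
  (==) = error "unreachable"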

IMO it is the theorist aspect that promotes domain modeling thorough enough to model error states completely.

Carol: “Mathematics keeps improving and changing I am sure, everything does”

Alice: “No, it only grows, it has not changed its mind in over 100 years”

Lol what? Mathematicians are fallible and often find faults in decades-old proofs.

I think there’s a truth being approached here, but I don’t think this post has identified it. The arguments just don’t hold up.


As a side note, I do enjoy Overcoming Software and your presentations (including and maybe especially the stuff about error handling).
Thank you for sharing your knowledge!

I think I must have rubbed you the wrong way for some reason. Here are my short answers.

Type Safety: I have been trying to think deeply about what could have prevented any of the escaped bugs I encounter.
In my accounting, way over half could have been prevented by stronger types. Sure, anything can be overdone; e.g. I can add safety in places that are not error prone, merely adding cost to future refactoring. Anything in programming can be misused.

About the quality of error outputs: I wrote about this in my other posts, complaining about Maybe overuse where error information exists and is suppressed, and about Alternative use that will throw troubleshooters for a loop. E.g. Alternative is implemented in IO by swallowing error information:

-- the IOError raised by m is silently discarded; n runs instead
mplusIO :: IO a -> IO a -> IO a
mplusIO m n = m `catchException` \ (_ :: IOError) -> n

The end result is code whose error output is hard to trust. I am not saying people are forced to overuse these concepts; I am saying that I am observing the overuse in the code I work on.
I am not claiming that good error outputs are trivial; I am saying that in my experience the error output quality is lower in FP programs. That judgement may not resonate with some readers; e.g. someone could compare Haskell to the (often nonexistent) error handling in JS.
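
A minimal sketch of the effect (hypothetical file names):

import Control.Applicative ((<|>))

-- if both reads fail, the IOError from "a.txt" is silently
-- discarded and only the "b.txt" failure reaches the user
main :: IO ()
main = (readFile "a.txt" <|> readFile "b.txt") >>= putStrLn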

About mathematicians and errors. Let me quote part of what I wrote:

“This is kinda fun to think about: we have only empirical evidence of mathematics itself being correct, we know we will never prove it formally. Mistakes in mathematics are extremely rare. However, we have a lot of empirical evidence of (past) incorrectness in various empirical sciences. I am a pragmatist enough to say that mathematics is correct, the rest looks good only until we learn more about it (just like bugs in software)”

We are talking about empirical observability here. I have not encountered many invalid proofs in my career. Granted, I did not stay in that field for long, only a few years. I was an editor for a mathematical publication as part of my duties for maybe two years and reviewed a few papers before they were published. I have obviously read many and written quite a few, and my experience matches Alice’s.

Can you provide examples? Oh yes, you will probably give me some well-known case like Wiles’ original proof of Fermat’s Last Theorem. Do you know of any statistics showing a significant, say 0.1%, error rate in math proofs? I do not. I do know of some quite high statistics for, e.g., misdiagnosis in medical fields, error rates in software, etc.

A cool example is to compare Turing’s and Church’s work on undecidability: Turing’s machines had initial bugs, while the lambda calculus paper (as far as I know) did not. But this is obviously only one example.

Repeating myself, the argument is observational: I put incorrect proofs under the “exceptions that prove the rule” category of being very rare. They are rare, and thus we talk about them when they happen. To me, the formal is the essence of correctness.

The bits you pointed out are a small sample of the topics considered in my article.
My main goal was to talk about different mindsets in the context of improving communication and understanding those mindsets. To do that I had to exaggerate a little and present them in a binary, stereotypical way.

I was sorry to learn that you found nothing positive to say about my article.

My experience aligns with this.

With an odd exception: FP code written by amateurs. IO code full of debug statements and simplistic error handling. Things that go wrong usually crash the program. There is no design whatsoever. The only way to survive such a codebase is having debug statements everywhere and trusting good low-level exceptions. And it works.
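
A caricature of the style being described (my sketch):

import Debug.Trace (trace)

-- traces everywhere, error handling left to low-level exceptions:
-- head simply crashes the program on an empty list, and that crash
-- is exactly what you end up trusting
process :: [Int] -> Int
process xs = trace ("process: " ++ show xs) (head xs * 2)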

I’m not sure if it’s a ton.

Getting Prelude.head errors with zero debug trace doesn’t strike me as very well thought through.
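
For reference, a minimal reproduction of that failure mode (with most GHC/base versions):

-- running this prints just "Prelude.head: empty list" (prefixed with
-- the program name), with no hint of which call site was responsible
main :: IO ()
main = print (head ([] :: [Int]))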

I agree. I wrote an article on the subject a while back.


There could even be an “Ockham’s razor” mindset; I have met people who were very much into this line of thought. I am not exploring this at all and think it is orthogonal to the topics discussed.

I am more a “Hiccum’s Dictum” believer: if things go wrong, everything falls apart :).

Some people consider simulation to be distinct from both the deductive and the empirical,
and consider it yet another process for gaining understanding.

Valid observation though. Thanks!

I added a note to the unexplored section, as this seems a very interesting topic in the context of programming.


Simple solution: modify Prelude.head to send the program into an infinite loop after the message is displayed; a debugger can then be attached to the still-running program to examine its call stack.
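
A sketch of that workaround (my illustration; headDebuggable is a hypothetical name):

import Control.Concurrent (threadDelay)
import Control.Monad (forever)
import System.IO (hPutStrLn, stderr)
import System.IO.Unsafe (unsafePerformIO)

-- report the error, then spin forever so a debugger can attach to
-- the still-running process and examine its state
headDebuggable :: [a] -> a
headDebuggable (x : _) = x
headDebuggable []      = unsafePerformIO $ do
  hPutStrLn stderr "headDebuggable: empty list (hanging for debugger)"
  forever (threadDelay maxBound)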

fwiw i found it really hard to look past this comment (i’m not sure if it’s intended, but it also comes across quite arrogantly):

Formal reasoning [3] is the only approach humans have figured out to solve complex problems correctly [4] on the first go (without the trial and error associated with empirical reasoning).

why do you think this? to me, someone who has done formal reasoning (but by no means an expert), it feels completely wrong. formalising things has nothing to do with “solving” “problems” “on the first go”; it only has to do with making what you’re doing more precise. i.e. it certainly doesn’t seem to reduce the trial and error; it’s just that the trial and error happens in a different place.

i.e. any program that runs solves the trivial problem of its own construction; i.e. formal methods do nothing but help you link the theory and the execution.

for a broader point, i often find distinguishing points like this are bound to be immediately wrong, because all programming is essentially equivalent.


I think the point is that formal methods allow you to know when your work is finished. When you’ve completed the proof and your tools accept it, you know you are done, while trial and error usually continues until the programmer personally feels satisfied with the results. Of course, even with formal methods you often still have to go through a process of trial and error with the requirements/specification.

Also, I’d separate two kinds of formal methods: intrinsic and extrinsic. Extrinsic formal methods use external proof systems, like writing a program in C and formalizing it in Coq, or creating a model and using a model checker to prove properties. Intrinsic formal methods can be practiced in languages like Agda, and to some extent in Haskell with GADTs and theorems for free, where you can make your properties part of the data structures you manipulate (e.g. intrinsically typed abstract syntax trees). Any program on these data structures is automatically guaranteed to satisfy its properties.
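
A minimal Haskell sketch of the intrinsic style (my illustration): an expression type in which ill-typed terms cannot even be constructed, so the evaluator needs no error cases.

{-# LANGUAGE GADTs #-}

data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- terms such as Add (BoolLit True) (IntLit 1) are rejected by GHC,
-- so eval is total and cannot "go wrong" at runtime
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e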

Intrinsic formal methods really do give you the feeling that your programs are correctly implemented in one go.

Are you saying that in your experience, all programmers everywhere approach programming essentially the same way? That would certainly be different from many of my experiences…

Are you saying that in your experience, all programmers everywhere approach programming
essentially the same way? That would certainly be different from many of my experiences…

Nice question; what I mean to say is that anything that tries to distinguish one kind of programming as fundamentally different from any other is bound to be just a preference in expression/working style.


I do not want to dismiss or avoid discussing this in depth, but I really think that persevering (you mentioned that seeing my side note about formal reasoning discouraged you from continuing) to the side note with exercises in Empirical FP, and thinking about those exercises, is better than trying to argue this out.
One exercise is to implement the same function in your mainstream language of choice so that it violates a property that is impossible to violate in Haskell. You will also see an implementation made entirely of equations you can use directly in equational reasoning. How would you “extrinsically” convert a TS implementation that uses lots of Array.push() to Coq? You end up counting pushes on each branch of the conditional logic… Your code is a collection of idioms that are hard to reason about. Formal reasoning about a lot of code is simply not pragmatic. It is like testing: good code is testable (and not all code is very testable). Here, good code is provable (and not all code is very provable).
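
For instance (my illustration, not the article’s exercise), a Haskell definition is itself a set of equations, each usable as a rewrite rule in a proof:

-- each clause is an equation; together they support proofs such as
--   length (myMap f xs) == length xs  by induction on xs
myMap :: (a -> b) -> [a] -> [b]
myMap _ []       = []                 -- (1)
myMap f (x : xs) = f x : myMap f xs   -- (2)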

Thinking about what can go wrong with that code and doing these exercises could explain why the formal is “correct on first go” better than I ever would be able to.

About formal reasoning “solving complex problems correctly on first go”: footnote 4 explains why I think so.
Basically, mistakes in mathematics are extremely rare; the empirical evidence suggests that the formal is mostly correct on the first go.

@rpeszek

respectfully, i think i’ll agree to disagree. my interpretation of the path you’re describing is exactly the kind of haskell elitism that i find really difficult about this community.

Please note that the point of the article is not to agree but to try to understand what the positions are.
If people agreed with what I wrote, this article (including its title) would not make much sense.

Also, it is productive to be able to disagree on some topics and agree on others.

I will make changes to my post based on this feedback.

I set out to write about different mindsets; a valid criticism of that would be
“The position you are describing is held by nobody”, rather than “I do not agree with that position”.

I will make it more clear that my goal is to catalog and understand different positions and not to seek consensus about which is the right one. This includes my side-notes about formalism and mathematics. I will add “IMO” to some side-notes.

I will make a note that there appears to be no consensus that formalism or mathematics are much less error prone than the empirical method. I remain surprised that there is no consensus on this, but my goal is not to argue validity of anyone’s viewpoint.

Thank you for your time and feedback.
